# Daytona Documentation v0.0.0-dev
# Generated on: 2026-04-16
Daytona API keys authenticate requests to the [Daytona API](https://www.daytona.io/docs/en/tools/api.md). They are used by the Daytona [SDKs](https://www.daytona.io/docs/en/getting-started.md#sdks) and [CLI](https://www.daytona.io/docs/en/getting-started.md#cli) to access and manage resources in your organization.
## Create an API key
Daytona provides options to create API keys in [Daytona Dashboard ↗](https://app.daytona.io/dashboard/keys) or programmatically using the [API](https://www.daytona.io/docs/en/tools/api.md#daytona/tag/api-keys).
1. Navigate to [Daytona Dashboard ↗](https://app.daytona.io/dashboard/keys)
2. Click the **Create Key** button
3. Enter the name of the API key, set the expiration date, and [select permissions](#permissions--scopes)
4. Click **Create** to create the API key
5. Copy the API key to your clipboard
To use the API key in your application, set the `DAYTONA_API_KEY` environment variable. Daytona supports multiple options to configure your environment: [in code](https://www.daytona.io/docs/en/configuration.md#configuration-in-code), [environment variables](https://www.daytona.io/docs/en/configuration.md#environment-variables), [.env file](https://www.daytona.io/docs/en/configuration.md#env-file), and [default values](https://www.daytona.io/docs/en/configuration.md#default-values).
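As an illustration of the precedence between in-code configuration and the `DAYTONA_API_KEY` environment variable, here is a minimal sketch; the helper and key values are hypothetical, not part of the Daytona SDK:

```python
import os

def resolve_api_key(explicit=None):
    """Hypothetical resolution order: an explicit in-code value wins,
    then the DAYTONA_API_KEY environment variable, else None."""
    return explicit or os.environ.get("DAYTONA_API_KEY")

os.environ["DAYTONA_API_KEY"] = "dtn_from_env"   # placeholder value
print(resolve_api_key())                # env var is used
print(resolve_api_key("dtn_in_code"))   # in-code value wins
```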
API keys support optional expiration and can be revoked at any time. After creation, you can only retrieve a masked key value when listing keys.
```bash
curl 'https://app.daytona.io/api/api-keys' \
--request POST \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer YOUR_API_KEY' \
--data '{
"name": "My API Key",
"permissions": ["write:sandboxes", "delete:sandboxes"],
"expiresAt": "2030-01-01T00:00:00.000Z"
}'
```
For more information, see the [API](https://www.daytona.io/docs/en/tools/api.md#daytona/tag/api-keys) reference:
> [**Create API key (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona/tag/api-keys/POST/api-keys)
## Permissions & Scopes
| **Resource** | **Scope** | **Description** |
| ------------ | ----------------------- | ------------------------ |
| Sandboxes | **`write:sandboxes`** | Create/modify sandboxes |
| | **`delete:sandboxes`** | Delete sandboxes |
| Snapshots | **`write:snapshots`** | Create/modify snapshots |
| | **`delete:snapshots`** | Delete snapshots |
| Registries | **`write:registries`** | Create/modify registries |
| | **`delete:registries`** | Delete registries |
| Volumes | **`read:volumes`** | View volumes |
| | **`write:volumes`** | Create/modify volumes |
| | **`delete:volumes`** | Delete volumes |
| Audit | **`read:audit_logs`** | View audit logs |
| Regions | **`write:regions`** | Create/modify regions |
| | **`delete:regions`** | Delete regions |
| Runners | **`read:runners`** | View runners |
| | **`write:runners`** | Create/modify runners |
| | **`delete:runners`** | Delete runners |
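As a client-side convenience, the scope names in the table above can be validated before requesting a key. This helper is a sketch, not part of any Daytona SDK:

```python
# Scope names copied from the permissions table above.
KNOWN_SCOPES = {
    "write:sandboxes", "delete:sandboxes",
    "write:snapshots", "delete:snapshots",
    "write:registries", "delete:registries",
    "read:volumes", "write:volumes", "delete:volumes",
    "read:audit_logs",
    "write:regions", "delete:regions",
    "read:runners", "write:runners", "delete:runners",
}

def validate_scopes(requested):
    """Rejects unknown scope names before a create-key request is sent."""
    unknown = set(requested) - KNOWN_SCOPES
    if unknown:
        raise ValueError(f"unknown scopes: {sorted(unknown)}")
    return list(requested)

print(validate_scopes(["write:sandboxes", "delete:sandboxes"]))
```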
## List API keys
Daytona provides methods to list all API keys for the current user or organization.
```bash
curl 'https://app.daytona.io/api/api-keys' \
--header 'Authorization: Bearer YOUR_API_KEY'
```
For more information, see the [API](https://www.daytona.io/docs/en/tools/api.md#daytona/tag/api-keys) reference:
> [**list (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona/tag/api-keys/GET/api-keys)
## Get current API key
Daytona provides methods to get details of the API key used to authenticate the current request.
```bash
curl 'https://app.daytona.io/api/api-keys/current' \
--header 'Authorization: Bearer YOUR_API_KEY'
```
For more information, see the [API](https://www.daytona.io/docs/en/tools/api.md#daytona/tag/api-keys) reference:
> [**get current (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona/tag/api-keys/GET/api-keys/current)
## Get API key
Daytona provides methods to get a single API key by name.
```bash
curl 'https://app.daytona.io/api/api-keys/my-api-key' \
--header 'Authorization: Bearer YOUR_API_KEY'
```
For more information, see the [API](https://www.daytona.io/docs/en/tools/api.md#daytona/tag/api-keys) reference:
> [**get (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona/tag/api-keys/GET/api-keys/{name})
## Delete API key
Daytona provides options to delete an API key in [Daytona Dashboard ↗](https://app.daytona.io/dashboard/keys) or programmatically using the API. The key is revoked immediately and cannot be recovered.
1. Navigate to [Daytona Dashboard ↗](https://app.daytona.io/dashboard/keys)
2. Click **Revoke** next to the API key you want to delete
3. Confirm the revocation
```bash
curl 'https://app.daytona.io/api/api-keys/my-api-key' \
--request DELETE \
--header 'Authorization: Bearer YOUR_API_KEY'
```
For more information, see the [API](https://www.daytona.io/docs/en/tools/api.md#daytona/tag/api-keys) reference:
> [**delete (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona/tag/api-keys/DELETE/api-keys/{name})
## Delete API key for user
Daytona provides options for organization admins to delete an API key for a specific user.
```bash
curl 'https://app.daytona.io/api/api-keys/{userId}/my-api-key' \
--request DELETE \
--header 'Authorization: Bearer YOUR_API_KEY'
```
For more information, see the [API](https://www.daytona.io/docs/en/tools/api.md#daytona/tag/api-keys) reference:
> [**delete for user (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona/tag/api-keys/DELETE/api-keys/{userId}/{name})
Daytona provides **full composable computers** — [sandboxes](https://www.daytona.io/docs/en/sandboxes.md) — for AI agents. The Daytona platform is organized into multiple planes, each serving a specific purpose:
- [Interface plane](#interface-plane) provides client interfaces for interacting with Daytona
- [Control plane](#control-plane) orchestrates all sandbox operations
- [Compute plane](#compute-plane) runs and manages sandbox instances
### Interface plane
The interface plane provides client interfaces for users and agents to interact with Daytona. The following components are part of the interface plane and available to all users and agents:
- **SDK**: [Python](https://www.daytona.io/docs/en/python-sdk.md), [TypeScript](https://www.daytona.io/docs/en/typescript-sdk.md), [Ruby](https://www.daytona.io/docs/en/ruby-sdk.md), and [Go](https://www.daytona.io/docs/en/go-sdk.md) for programmatic sandbox management
- [CLI](https://www.daytona.io/docs/en/tools/cli.md): command-line interface for direct sandbox operations
- [Dashboard](https://app.daytona.io/dashboard/): web interface for visual sandbox management and monitoring
- [MCP](https://www.daytona.io/docs/en/mcp.md): Model Context Protocol server for AI tool integration
- [SSH](https://www.daytona.io/docs/en/ssh-access.md): secure shell access to running sandboxes
### Control plane
The control plane is the central coordination layer of the Daytona platform. It receives all client requests, manages the full sandbox lifecycle, schedules sandboxes onto runners, and continuously reconciles states across the infrastructure. The control plane includes the following components:
- [API](#api) handles authentication, sandbox lifecycle management, and resource allocation
- [Proxy](#proxy) routes external traffic to sandboxes, enabling direct access to services
- [Snapshot builder](#snapshot-builder) builds and manages sandbox [snapshots](https://www.daytona.io/docs/en/snapshots.md)
- [Sandbox manager](#sandbox-manager) handles sandbox lifecycle management and state reconciliation
#### API
The API is a NestJS-based RESTful service that serves as the primary entry point for all platform operations, managing authentication, sandbox lifecycle, snapshots, volumes, and resource allocation. The [snapshot builder](#snapshot-builder) and [sandbox manager](#sandbox-manager) run as internal processes within the API. The API integrates the following internal services and components:
- **Redis** provides caching, session management, and distributed locking
- **PostgreSQL** serves as the primary persistent store for metadata and configuration
- **Auth0/OIDC provider** authenticates users and services via OpenID Connect. The API enforces organization-level multi-tenancy, where each sandbox, snapshot, and volume belongs to an organization, and access control is applied at the organization boundary
- **SMTP server** handles email delivery for organization invitations, account notifications, and alert messages
- [Sandbox manager](#sandbox-manager) schedules sandboxes onto runners, reconciles states, and enforces sandbox lifecycle management policies
- **PostHog** collects platform analytics and usage metrics for monitoring and improvement
To interact with sandboxes from the API, see the [API](https://www.daytona.io/docs/en/tools/api.md) and [Toolbox API](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox) references.
#### Proxy
The proxy is a dedicated HTTP proxy that routes external traffic to the correct sandbox using host-based routing. Each sandbox is reachable at `{port}-{sandboxId}.{proxy-domain}`, where the port maps to a service running inside the sandbox. The proxy resolves the target runner for a given sandbox, injects authentication headers, and forwards the request. It supports both HTTP and WebSocket protocols.
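A minimal sketch of the host-based routing scheme described above; the proxy domain here is a placeholder, since the actual domain depends on your deployment:

```python
def preview_host(port, sandbox_id, proxy_domain):
    """Builds the {port}-{sandboxId}.{proxy-domain} host that the proxy
    resolves to a service running inside the sandbox."""
    return f"{port}-{sandbox_id}.{proxy_domain}"

# e.g. a web server listening on port 3000 inside sandbox "abc123"
print(preview_host(3000, "abc123", "proxy.example.com"))
```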
#### Snapshot builder
The snapshot builder is part of the API process and orchestrates the creation of sandbox [snapshots](https://www.daytona.io/docs/en/snapshots.md) from a Dockerfile or a pre-built image from a [container registry](#container-registry). It coordinates with runners to build or pull images, which are then pushed to an internal snapshot registry that implements the OCI distribution specification.
#### Sandbox manager
The sandbox manager is part of the API process and schedules sandboxes onto runners, reconciles states, and enforces [sandbox lifecycle management](https://www.daytona.io/docs/en/sandboxes.md#sandbox-lifecycle) policies.
### Compute plane
The compute plane is the infrastructure layer where sandboxes run. Sandboxes run on [runners](#sandbox-runners), compute nodes that host multiple sandboxes with dedicated resources and scale horizontally across shared or dedicated [regions](https://www.daytona.io/docs/en/regions.md). The compute plane consists of the following components:
- [Sandbox runners](#sandbox-runners) host sandboxes with dedicated resources
- [Sandbox daemon](#sandbox-daemon) provides code execution and environment access inside each sandbox
- [Snapshot store](#snapshot-store) stores sandbox snapshot images
- [Volumes](#volumes) provide persistent storage shared across sandboxes
#### Sandbox runners
Runners are compute nodes that power Daytona's compute plane, providing the underlying infrastructure for running sandbox workloads. Each runner polls the control plane API for jobs and executes sandbox operations: creating, starting, stopping, destroying, resizing, and backing up sandboxes. Runners interact with S3-compatible object storage for snapshot and volume data, and with the internal snapshot registry.
Each sandbox runs as an isolated instance with its own Linux namespaces for processes, network, filesystem mounts, and inter-process communication. Each runner allocates dedicated vCPU, RAM, and disk resources per sandbox.
#### Sandbox daemon
The sandbox daemon is a code execution agent that runs inside each sandbox. It exposes the [Toolbox API](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox), providing direct access to the sandbox environment: file system and Git operations, process and code execution, computer use, log streaming, and terminal sessions.
#### Snapshot store
The snapshot store is an internal OCI-compliant registry that stores sandbox snapshot images using the OCI distribution specification. Runners pull snapshot images from this store when creating new sandboxes. The store uses S3-compatible object storage as its backend.
#### Volumes
[Volumes](https://www.daytona.io/docs/en/volumes.md) provide persistent storage that can be shared across sandboxes. Each volume is backed by S3-compatible object storage and mounted into sandboxes as a read-write directory. Multiple sandboxes can mount the same volume simultaneously, allowing data to be shared across sandboxes and persist independently of the sandbox lifecycle.
### Container registry
Container registries serve as the source for sandbox base images. When creating a [snapshot](https://www.daytona.io/docs/en/snapshots.md), the snapshot builder pulls the specified image from an external registry, and pushes it to the internal snapshot registry for use by runners. For Dockerfile-based snapshots, parent images referenced in `FROM` directives are also pulled from the configured source registries during the build. Daytona supports any OCI-compatible registry:
- [Docker Hub](https://www.daytona.io/docs/en/snapshots.md#docker-hub)
- [Google Artifact Registry](https://www.daytona.io/docs/en/snapshots.md#google-artifact-registry)
- [GitHub Container Registry (GHCR)](https://www.daytona.io/docs/en/snapshots.md#github-container-registry-ghcr)
- [Private registries](https://www.daytona.io/docs/en/snapshots.md#using-images-from-private-registries): any registry that implements the OCI distribution specification
Daytona audit logs provide a detailed record of user and system activity across your organization. Use this feature to track sandbox lifecycle events, user access, system changes, and more.
- **Security audits**: monitor for unauthorized access or sandbox misuse
- **Debugging**: understand sandbox lifecycle issues (e.g. failed starts)
- **Compliance export**: export logs for internal or external audits (coming soon)
Audit logs are available to [admins](https://www.daytona.io/docs/en/organizations.md#organization-roles) with full access and [members](https://www.daytona.io/docs/en/organizations.md#organization-roles) with audit log permissions. Contact your organization administrator if you cannot access audit logs.
## Access from Dashboard
Access the audit logs page directly from [Daytona Dashboard ↗](https://app.daytona.io/dashboard/audit-logs). The audit logs page displays a list of all audit logs for your organization, including the following columns:
- **Time**: the timestamp of the action
- **User**: the user who performed the action
- [Actions](#actions): the action performed
- [Targets](#targets): the resource affected by the action
- [Outcomes](#outcomes): the result of the action
To filter audit logs by time, use the date range picker in the top-left corner of the page.
## Real-time updates
Daytona provides real-time updates of audit logs. Enable the **Auto Refresh** toggle in the top-right corner of the [Daytona Audit Logs ↗](https://app.daytona.io/dashboard/audit-logs) page to automatically refresh logs as new events occur.
## Programmatic management
Daytona provides API endpoints for programmatic access to audit logs.
### Get all audit logs
To get all audit logs, use the following API endpoint:
```bash
curl https://app.daytona.io/api/audit \
--header 'Authorization: Bearer YOUR_API_KEY'
```
For more information, see the [API](https://www.daytona.io/docs/en/tools/api.md#daytona/tag/audit) reference:
> [**Get all audit logs (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona/tag/audit/GET/audit-logs)
### Get audit logs for organization
To get audit logs for a specific organization, use the following API endpoint:
```bash
curl https://app.daytona.io/api/audit/organizations/{organizationId} \
--header 'Authorization: Bearer YOUR_API_KEY'
```
For more information, see the [API](https://www.daytona.io/docs/en/tools/api.md#daytona/tag/audit) reference:
> [**Get audit logs for organization (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona/tag/audit/GET/audit/organizations/{organizationId})
## Log structure
Each audit log entry contains the following fields:
| Field | Type | Description |
| -------------------- | ------ | ---------------------------------------------------- |
| **`id`** | string | Unique log entry identifier |
| **`actorId`** | string | ID of the user who performed the action |
| **`actorEmail`** | string | Email of the user who performed the action |
| **`organizationId`** | string | Organization ID |
| **`action`** | string | Operation executed (e.g., `create`, `start`, `stop`) |
| **`targetType`** | string | Resource type affected (e.g., `sandbox`, `snapshot`) |
| **`targetId`** | string | ID of the affected resource |
| **`statusCode`** | number | HTTP status code of the result |
| **`errorMessage`** | string | Error message if the action failed |
| **`ipAddress`** | string | IP address of the request origin |
| **`userAgent`** | string | User agent of the request origin |
| **`source`** | string | Source of the action |
| **`metadata`** | object | Additional context about the action |
| **`createdAt`** | string | ISO 8601 timestamp of when the action occurred |
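For illustration, an entry with these fields can be consumed as plain JSON; the entry below is fabricated:

```python
import json

# A hypothetical entry shaped like the fields in the table above.
raw = '''{"id": "log-123", "actorEmail": "dev@example.com",
          "action": "create", "targetType": "sandbox", "targetId": "sbx-1",
          "statusCode": 200, "createdAt": "2026-04-16T12:00:00.000Z"}'''
entry = json.loads(raw)

# Render a one-line summary: timestamp, actor, action, target, status.
line = (f'{entry["createdAt"]} {entry["actorEmail"]} '
        f'{entry["action"]} {entry["targetType"]}/{entry["targetId"]} '
        f'({entry["statusCode"]})')
print(line)
```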
## Actions
Below is the complete list of actions logged by Daytona:
```text
create, read, update, delete, login,
set_default, update_access, update_quota, update_region_quota,
suspend, unsuspend, accept, decline,
link_account, unlink_account, leave_organization,
regenerate_key_pair, update_scheduling,
start, stop, replace_labels, create_backup,
update_public_status, set_auto_stop_interval,
set_auto_archive_interval, set_auto_delete_interval, archive,
snapshot, fork,
get_port_preview_url, set_general_status, activate, deactivate,
update_network_settings,
send_webhook_message, initialize_webhooks,
update_sandbox_default_limited_network_egress,
create_ssh_access, revoke_ssh_access,
regenerate_proxy_api_key, regenerate_ssh_gateway_api_key, regenerate_snapshot_manager_credentials
```
## Targets
Each action targets a specific resource type. Possible targets include:
```text
api_key, organization, organization_invitation,
organization_role, organization_user, docker_registry,
runner, sandbox, snapshot, user, volume
```
## Outcomes
The outcome field indicates the result of the action. Statuses follow standard HTTP semantics:
| **Outcome** | **Description** |
| ----------- | ----------------------------- |
| Info | Informational (1xx codes) |
| Success | Action succeeded (2xx codes) |
| Redirect | Redirects (3xx codes) |
| Error | Client/server error (4xx/5xx) |
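The mapping in the table above amounts to bucketing the entry's `statusCode` by its HTTP class, which can be sketched as:

```python
def outcome(status_code):
    """Buckets an HTTP status code into the outcomes listed above."""
    if status_code < 200:
        return "Info"
    if status_code < 300:
        return "Success"
    if status_code < 400:
        return "Redirect"
    return "Error"

print(outcome(201), outcome(304), outcome(404))
```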
Daytona provides an overview of your organization's [wallet](#wallet) and [spending](#spending). Daytona uses a pay-as-you-go billing model where you are charged based on the resources your sandboxes consume. For information on resource quotas, rate limits, and tier-based access, see [limits](https://www.daytona.io/docs/en/limits.md).
## Wallet
[Daytona Wallet ↗](https://app.daytona.io/dashboard/billing/wallet) shows the current balance of the organization's wallet and the amount of credits spent this month.
### Overview
Overview provides a summary of your organization's wallet, including the current balance and the amount of credits spent this month (both displayed in USD), with options to add a [payment method](#payment-method) and [redeem a coupon](#redeem-coupon).
#### Payment method
Payment method connects your wallet to your preferred payment method, allowing you to add funds to your balance and receive invoices.
1. Navigate to [Daytona Wallet ↗](https://app.daytona.io/dashboard/billing/wallet)
2. Click the **Connect** button in the **Payment method** section
3. Follow the prompts to connect your payment method to your wallet
Organizations can set automatic top-up rules for their wallets:
- **Threshold**: when the wallet balance drops to this amount, a top-up is triggered
- **Target**: the wallet balance is topped up to this amount
Set both **Threshold** and **Target** to `0` to disable automatic top-up.
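The rules above can be sketched as follows; this is a client-side illustration, not how Daytona computes top-ups internally:

```python
def top_up_amount(balance, threshold, target):
    """When the balance drops to the threshold, top up to the target.
    Setting threshold and target to 0 disables automatic top-up."""
    if threshold == 0 and target == 0:
        return 0.0
    if balance <= threshold:
        return target - balance
    return 0.0

print(top_up_amount(4.0, 5.0, 50.0))   # at or below threshold: top up to target
print(top_up_amount(20.0, 5.0, 50.0))  # above threshold: no top-up
```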
#### Redeem coupon
Redeem coupon allows you to apply coupon codes to add credits to your wallet.
1. Navigate to [Daytona Wallet ↗](https://app.daytona.io/dashboard/billing/wallet)
2. Enter the coupon code in the **Redeem coupon** input field
3. Click the **Redeem** button to redeem the coupon code
### One-time top-up
One-time top-up allows you to add credits to your balance with a single payment.
1. Navigate to [Daytona Wallet ↗](https://app.daytona.io/dashboard/billing/wallet)
2. Select the top-up amount or enter a custom amount
3. Click the **Top up** button
4. Follow the prompts to complete the payment
After completing the payment, the amount will be added to your wallet and the **Current balance** will be updated.
### Invoices
Invoices are automatically generated and sent to your billing email addresses. Each invoice includes the following details:
- **Invoice**: the invoice identifier
- **Date**: the date the invoice was issued
- **Due date**: the date the invoice is due
- **Amount**: the amount of the invoice
- **Status**: the status of the invoice
- **Type**: the type of the invoice
To view an invoice:
1. Navigate to [Daytona Wallet ↗](https://app.daytona.io/dashboard/billing/wallet)
2. Click the three dots button (**⋯**) next to the invoice you want to view
3. Click the **View** button to see the invoice details
4. Optionally, download the invoice
## Spending
### Resource usage
Resource usage provides a summary of the organization's resource usage.
- **Total cost**: the total cost of your organization's usage
- **Sandboxes**: the total number of sandboxes in your organization
- **CPU**: the total CPU usage of your organization
- **RAM**: the total RAM usage of your organization
- **Disk**: the total disk usage of your organization
### Resource breakdown
Resource breakdown displays a breakdown of usage per resource.
- **CPU**: the total CPU usage of your organization
- **RAM**: the total RAM usage of your organization
- **Disk**: the total disk usage of your organization
### Per-sandbox usage
Per-sandbox usage displays usage per sandbox.
- **Sandbox ID**: the ID of the sandbox
- **Total price**: the total price of the sandbox's resource usage
- **CPU (seconds)**: the total CPU usage of the sandbox
- **RAM (GB-seconds)**: the total RAM usage of the sandbox
- **Disk (GB-seconds)**: the total disk usage of the sandbox
### Monthly breakdown
Monthly breakdown displays a chart of cost breakdown by month. The chart is interactive and you can filter by resources, change the chart type (bar or area), and select the time range (last 3 months, last 6 months, last 12 months).
## Cancellation & post-cancellation
:::note
Refer to [Daytona Terms of Service ↗](https://www.daytona.io/terms-of-service) for more information.
:::
When you delete your [organization](https://www.daytona.io/docs/en/organizations.md), cancel your subscription, or disable billing, you remain responsible for any sandbox usage that occurred before your action.
### Charges after cancellation
There is a delay of up to 48 hours between when sandbox resources are consumed and when the corresponding charges appear in the billing system. If you cancel during this window, charges for usage that already occurred may still post to your account. These charges reflect sandbox activity that happened before your cancellation and are not charges for new usage.
No charges will be asserted for usage first reported more than 48 hours after cancellation. In no event will any charge be asserted more than 30 calendar days after cancellation, regardless of the cause of any delay.
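The two deadlines above can be sketched as a simple check; this is an illustration of the stated policy, not a billing implementation:

```python
from datetime import datetime, timedelta

def charge_assertable(cancelled_at, usage_first_reported_at, charge_date):
    """Usage must be first reported within 48 hours of cancellation,
    and no charge may be asserted more than 30 calendar days after."""
    within_window = usage_first_reported_at <= cancelled_at + timedelta(hours=48)
    within_deadline = charge_date <= cancelled_at + timedelta(days=30)
    return within_window and within_deadline

cancelled = datetime(2026, 4, 16)
print(charge_assertable(cancelled, cancelled + timedelta(hours=24),
                        cancelled + timedelta(days=5)))   # inside both limits
print(charge_assertable(cancelled, cancelled + timedelta(hours=72),
                        cancelled + timedelta(days=5)))   # reported too late
```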
### Before cancelling
You are responsible for deleting all sandboxes and verifying that no active resources remain before cancelling.
1. Navigate to your organization's [Daytona Dashboard ↗](https://app.daytona.io/dashboard)
2. Delete all sandboxes across all projects
3. Confirm no active resources remain under your organization
Daytona will not charge you for resources that failed to delete due to a platform issue on Daytona's side.
### Final settlement
After cancellation, Daytona sends a final billing summary to the billing email address(es) on file within 5 business days, itemizing any charges posted during the 48-hour settlement window.
### Billing disputes
If you believe a post-cancellation charge is incorrect, you can submit a billing dispute.
1. Email [support@daytona.io](mailto:support@daytona.io) within 30 days of receiving your final settlement notice
2. Include your organization name and/or ID, and the specific charges in question
Daytona will provide detailed usage records supporting the disputed charges upon request and respond within 15 business days.
Computer Use enables programmatic control of desktop environments within sandboxes. It provides mouse, keyboard, screenshot, screen recording, and display operations for automating GUI interactions and testing desktop applications.
Computer Use and [VNC](https://www.daytona.io/docs/en/vnc-access.md) work together to enable both manual and automated desktop interactions. VNC provides the visual interface for users to manually interact with the desktop, while Computer Use provides the programmatic API for AI agents to automate operations.
Computer Use is available for **Linux**. **Windows** and **macOS** support is currently in private alpha.
:::caution[Private Alpha]
Computer Use for macOS and Windows is currently in private alpha and requires access. To request access, fill out the [Windows](https://docs.google.com/forms/d/e/1FAIpQLSfoK-77-VpfsMubw8F4f1opCxIL1AyJUgnM0ONYup5hZ0RTvQ/viewform?usp=dialog) or [macOS](https://docs.google.com/forms/d/e/1FAIpQLSc9xlGZ49OjWNkyzDPC9Ip3InMRR0ZXY3tcoD-PFQj3ck6gzQ/viewform?usp=sharing&ouid=103304973264148733944) access request form. Our team will review your request and reach out with setup instructions.
:::
- **GUI application testing**: automate interactions with native applications, click buttons, fill forms, and validate UI behavior
- **Visual testing & screenshots**: capture screenshots of applications, compare UI states, and perform visual regression testing
- **Desktop automation**: automate repetitive desktop tasks, file management through GUI, and complex workflows
## Start Computer Use
Start all computer use processes (Xvfb, xfce4, x11vnc, novnc) in the Sandbox.
```python
result = sandbox.computer_use.start()
print("Computer use processes started:", result.message)
```
```typescript
const result = await sandbox.computerUse.start();
console.log('Computer use processes started:', result.message);
```
```ruby
result = sandbox.computer_use.start
puts "Computer use processes started: #{result.message}"
```
```go
err := sandbox.ComputerUse.Start(ctx)
if err != nil {
log.Fatal(err)
}
defer sandbox.ComputerUse.Stop(ctx)
fmt.Println("Computer use processes started")
```
```bash
curl 'https://proxy.app.daytona.io/toolbox/{sandboxId}/computeruse/start' \
--request POST
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md), [Ruby SDK](https://www.daytona.io/docs/en/ruby-sdk/computer-use.md), [Go SDK](https://www.daytona.io/docs/en/go-sdk.md), and [API](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use) references:
> [**start (Python SDK)**](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md#computerusestart)
>
> [**start (TypeScript SDK)**](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md#start)
>
> [**start (Ruby SDK)**](https://www.daytona.io/docs/en/ruby-sdk/computer-use.md#start)
>
> [**Start Computer Use Processes (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use/POST/computeruse/start)
## Stop Computer Use
Stop all computer use processes in the Sandbox.
```python
result = sandbox.computer_use.stop()
print("Computer use processes stopped:", result.message)
```
```typescript
const result = await sandbox.computerUse.stop();
console.log('Computer use processes stopped:', result.message);
```
```ruby
result = sandbox.computer_use.stop
puts "Computer use processes stopped: #{result.message}"
```
```go
err := sandbox.ComputerUse.Stop(ctx)
if err != nil {
log.Fatal(err)
}
fmt.Println("Computer use processes stopped")
```
```bash
curl 'https://proxy.app.daytona.io/toolbox/{sandboxId}/computeruse/stop' \
--request POST
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md), [Ruby SDK](https://www.daytona.io/docs/en/ruby-sdk/computer-use.md), [Go SDK](https://www.daytona.io/docs/en/go-sdk.md), and [API](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use) references:
> [**stop (Python SDK)**](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md#computerusestop)
>
> [**stop (TypeScript SDK)**](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md#stop)
>
> [**stop (Ruby SDK)**](https://www.daytona.io/docs/en/ruby-sdk/computer-use.md#stop)
>
> [**Stop Computer Use Processes (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use/POST/computeruse/stop)
## Get status
Get the status of all computer use processes.
```python
response = sandbox.computer_use.get_status()
print("Computer use status:", response.status)
```
```typescript
const status = await sandbox.computerUse.getStatus();
console.log('Computer use status:', status.status);
```
```ruby
response = sandbox.computer_use.status
puts "Computer use status: #{response.status}"
```
```go
status, err := sandbox.ComputerUse.GetStatus(ctx)
if err != nil {
log.Fatal(err)
}
fmt.Printf("Computer use status: %v\n", status["status"])
```
```bash
curl 'https://proxy.app.daytona.io/toolbox/{sandboxId}/computeruse/status'
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md), [Ruby SDK](https://www.daytona.io/docs/en/ruby-sdk/computer-use.md), [Go SDK](https://www.daytona.io/docs/en/go-sdk.md), and [API](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use) references:
> [**get_status (Python SDK)**](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md#computeruseget_status)
>
> [**getStatus (TypeScript SDK)**](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md#getstatus)
>
> [**status (Ruby SDK)**](https://www.daytona.io/docs/en/ruby-sdk/computer-use.md#status)
>
> [**Get Computer Use status (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use/GET/computeruse/status)
## Get process status
Get the status of a specific VNC process.
```python
xvfb_status = sandbox.computer_use.get_process_status("xvfb")
novnc_status = sandbox.computer_use.get_process_status("novnc")
```
```typescript
const xvfbStatus = await sandbox.computerUse.getProcessStatus('xvfb');
const noVncStatus = await sandbox.computerUse.getProcessStatus('novnc');
```
```ruby
xvfb_status = sandbox.computer_use.get_process_status("xvfb")
no_vnc_status = sandbox.computer_use.get_process_status("novnc")
```
```bash
curl 'https://proxy.app.daytona.io/toolbox/{sandboxId}/computeruse/process/{processName}/status'
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md), [Ruby SDK](https://www.daytona.io/docs/en/ruby-sdk/computer-use.md), and [API](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use) references:
> [**get_process_status (Python SDK)**](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md#computeruseget_process_status)
>
> [**getProcessStatus (TypeScript SDK)**](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md#getprocessstatus)
>
> [**get_process_status (Ruby SDK)**](https://www.daytona.io/docs/en/ruby-sdk/computer-use.md#get_process_status)
>
> [**Get Process Status (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use/GET/computeruse/process/{processName}/status)
## Restart process
Restart a specific VNC process.
```python
result = sandbox.computer_use.restart_process("xfce4")
print("XFCE4 process restarted:", result.message)
```
```typescript
const result = await sandbox.computerUse.restartProcess('xfce4');
console.log('XFCE4 process restarted:', result.message);
```
```ruby
result = sandbox.computer_use.restart_process("xfce4")
puts "XFCE4 process restarted: #{result.message}"
```
```bash
curl 'https://proxy.app.daytona.io/toolbox/{sandboxId}/computeruse/process/{processName}/restart' \
--request POST
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md), [Ruby SDK](https://www.daytona.io/docs/en/ruby-sdk/computer-use.md), and [API](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use) references:
> [**restart_process (Python SDK)**](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md#computeruserestart_process)
>
> [**restartProcess (TypeScript SDK)**](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md#restartprocess)
>
> [**restart_process (Ruby SDK)**](https://www.daytona.io/docs/en/ruby-sdk/computer-use.md#restart_process)
>
> [**Restart process (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use/POST/computeruse/process/{processName}/restart)
## Get process logs
Get logs for a specific VNC process.
```python
logs = sandbox.computer_use.get_process_logs("novnc")
print("NoVNC logs:", logs)
```
```typescript
const logsResp = await sandbox.computerUse.getProcessLogs('novnc');
console.log('NoVNC logs:', logsResp.logs);
```
```ruby
logs = sandbox.computer_use.get_process_logs("novnc")
puts "NoVNC logs: #{logs}"
```
```bash
curl 'https://proxy.app.daytona.io/toolbox/{sandboxId}/computeruse/process/{processName}/logs'
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md), [Ruby SDK](https://www.daytona.io/docs/en/ruby-sdk/computer-use.md), and [API](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use) references:
> [**get_process_logs (Python SDK)**](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md#computeruseget_process_logs)
>
> [**getProcessLogs (TypeScript SDK)**](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md#getprocesslogs)
>
> [**get_process_logs (Ruby SDK)**](https://www.daytona.io/docs/en/ruby-sdk/computer-use.md#get_process_logs)
>
> [**Get process logs (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use/GET/computeruse/process/{processName}/logs)
## Get process errors
Get error logs for a specific VNC process.
```python
errors = sandbox.computer_use.get_process_errors("x11vnc")
print("X11VNC errors:", errors)
```
```typescript
const errorsResp = await sandbox.computerUse.getProcessErrors('x11vnc');
console.log('X11VNC errors:', errorsResp.errors);
```
```ruby
errors = sandbox.computer_use.get_process_errors("x11vnc")
puts "X11VNC errors: #{errors}"
```
```bash
curl 'https://proxy.app.daytona.io/toolbox/{sandboxId}/computeruse/process/{processName}/errors'
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md), [Ruby SDK](https://www.daytona.io/docs/en/ruby-sdk/computer-use.md), and [API](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use) references:
> [**get_process_errors (Python SDK)**](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md#computeruseget_process_errors)
>
> [**getProcessErrors (TypeScript SDK)**](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md#getprocesserrors)
>
> [**get_process_errors (Ruby SDK)**](https://www.daytona.io/docs/en/ruby-sdk/computer-use.md#get_process_errors)
>
> [**Get process errors (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use/GET/computeruse/process/{processName}/errors)
## Mouse operations
### Click
Click the mouse at the specified coordinates.
```python
# Single left click
result = sandbox.computer_use.mouse.click(100, 200)
# Double click
double_click = sandbox.computer_use.mouse.click(100, 200, "left", True)
# Right click
right_click = sandbox.computer_use.mouse.click(100, 200, "right")
```
```typescript
// Single left click
const result = await sandbox.computerUse.mouse.click(100, 200);
// Double click
const doubleClick = await sandbox.computerUse.mouse.click(100, 200, 'left', true);
// Right click
const rightClick = await sandbox.computerUse.mouse.click(100, 200, 'right');
```
```ruby
# Single left click
result = sandbox.computer_use.mouse.click(x: 100, y: 200)
# Double click
double_click = sandbox.computer_use.mouse.click(x: 100, y: 200, button: 'left', double: true)
# Right click
right_click = sandbox.computer_use.mouse.click(x: 100, y: 200, button: 'right')
```
```go
// Single left click
result, err := sandbox.ComputerUse.Mouse().Click(ctx, 100, 200, nil, nil)
if err != nil {
log.Fatal(err)
}
// Double click
doubleClick := true
result, err = sandbox.ComputerUse.Mouse().Click(ctx, 100, 200, nil, &doubleClick)
// Right click
rightButton := "right"
result, err = sandbox.ComputerUse.Mouse().Click(ctx, 100, 200, &rightButton, nil)
```
```bash
curl 'https://proxy.app.daytona.io/toolbox/{sandboxId}/computeruse/mouse/click' \
--request POST \
--header 'Content-Type: application/json' \
--data '{
"button": "",
"double": true,
"x": 1,
"y": 1
}'
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md), [Go SDK](https://www.daytona.io/docs/en/go-sdk.md), and [API](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use) references:
> [**click (Python SDK)**](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md#mouseclick)
>
> [**click (TypeScript SDK)**](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md#click)
>
> [**Click (Go SDK)**](https://www.daytona.io/docs/en/go-sdk/daytona.md#MouseService.Click)
>
> [**Mouse click (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use/POST/computeruse/mouse/click)
### Move
Move the mouse cursor to the specified coordinates.
```python
result = sandbox.computer_use.mouse.move(100, 200)
print(f"Mouse moved to: {result.x}, {result.y}")
```
```typescript
const result = await sandbox.computerUse.mouse.move(100, 200);
console.log(`Mouse moved to: ${result.x}, ${result.y}`);
```
```ruby
result = sandbox.computer_use.mouse.move(x: 100, y: 200)
puts "Mouse moved to: #{result.x}, #{result.y}"
```
```go
result, err := sandbox.ComputerUse.Mouse().Move(ctx, 100, 200)
if err != nil {
log.Fatal(err)
}
fmt.Printf("Mouse moved to: %v, %v\n", result["x"], result["y"])
```
```bash
curl 'https://proxy.app.daytona.io/toolbox/{sandboxId}/computeruse/mouse/move' \
--request POST \
--header 'Content-Type: application/json' \
--data '{
"x": 1,
"y": 1
}'
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md), [Go SDK](https://www.daytona.io/docs/en/go-sdk.md), and [API](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use) references:
> [**move (Python SDK)**](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md#mousemove)
>
> [**move (TypeScript SDK)**](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md#move)
>
> [**Move (Go SDK)**](https://www.daytona.io/docs/en/go-sdk/daytona.md#MouseService.Move)
>
> [**Mouse move (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use/POST/computeruse/mouse/move)
### Drag
Drag the mouse from start coordinates to end coordinates.
```python
result = sandbox.computer_use.mouse.drag(50, 50, 150, 150)
print(f"Dragged from {result.from_x},{result.from_y} to {result.to_x},{result.to_y}")
```
```typescript
const result = await sandbox.computerUse.mouse.drag(50, 50, 150, 150);
console.log(`Dragged from ${result.from.x},${result.from.y} to ${result.to.x},${result.to.y}`);
```
```ruby
result = sandbox.computer_use.mouse.drag(start_x: 50, start_y: 50, end_x: 150, end_y: 150)
puts "Dragged from #{result.from_x},#{result.from_y} to #{result.to_x},#{result.to_y}"
```
```go
result, err := sandbox.ComputerUse.Mouse().Drag(ctx, 50, 50, 150, 150, nil)
if err != nil {
log.Fatal(err)
}
fmt.Printf("Dragged to %v, %v\n", result["x"], result["y"])
```
```bash
curl 'https://proxy.app.daytona.io/toolbox/{sandboxId}/computeruse/mouse/drag' \
--request POST \
--header 'Content-Type: application/json' \
--data '{
"button": "",
"endX": 1,
"endY": 1,
"startX": 1,
"startY": 1
}'
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md), [Go SDK](https://www.daytona.io/docs/en/go-sdk.md), and [API](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use) references:
> [**drag (Python SDK)**](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md#mousedrag)
>
> [**drag (TypeScript SDK)**](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md#drag)
>
> [**Drag (Go SDK)**](https://www.daytona.io/docs/en/go-sdk/daytona.md#MouseService.Drag)
>
> [**Mouse drag (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use/POST/computeruse/mouse/drag)
### Scroll
Scroll the mouse wheel at the specified coordinates.
```python
# Scroll up
scroll_up = sandbox.computer_use.mouse.scroll(100, 200, "up", 3)
# Scroll down
scroll_down = sandbox.computer_use.mouse.scroll(100, 200, "down", 5)
```
```typescript
// Scroll up
const scrollUp = await sandbox.computerUse.mouse.scroll(100, 200, 'up', 3);
// Scroll down
const scrollDown = await sandbox.computerUse.mouse.scroll(100, 200, 'down', 5);
```
```ruby
# Scroll up
scroll_up = sandbox.computer_use.mouse.scroll(x: 100, y: 200, direction: 'up', amount: 3)
# Scroll down
scroll_down = sandbox.computer_use.mouse.scroll(x: 100, y: 200, direction: 'down', amount: 5)
```
```go
// Scroll up
amount := 3
success, err := sandbox.ComputerUse.Mouse().Scroll(ctx, 100, 200, "up", &amount)
if err != nil {
log.Fatal(err)
}
// Scroll down
amount = 5
success, err = sandbox.ComputerUse.Mouse().Scroll(ctx, 100, 200, "down", &amount)
```
```bash
curl 'https://proxy.app.daytona.io/toolbox/{sandboxId}/computeruse/mouse/scroll' \
--request POST \
--header 'Content-Type: application/json' \
--data '{
"amount": 1,
"direction": "",
"x": 1,
"y": 1
}'
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md), [Go SDK](https://www.daytona.io/docs/en/go-sdk.md), and [API](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use) references:
> [**scroll (Python SDK)**](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md#mousescroll)
>
> [**scroll (TypeScript SDK)**](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md#scroll)
>
> [**Scroll (Go SDK)**](https://www.daytona.io/docs/en/go-sdk/daytona.md#MouseService.Scroll)
>
> [**Mouse scroll (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use/POST/computeruse/mouse/scroll)
### Get position
Get the current mouse cursor position.
```python
position = sandbox.computer_use.mouse.get_position()
print(f"Mouse is at: {position.x}, {position.y}")
```
```typescript
const position = await sandbox.computerUse.mouse.getPosition();
console.log(`Mouse is at: ${position.x}, ${position.y}`);
```
```ruby
position = sandbox.computer_use.mouse.position
puts "Mouse is at: #{position.x}, #{position.y}"
```
```go
position, err := sandbox.ComputerUse.Mouse().GetPosition(ctx)
if err != nil {
log.Fatal(err)
}
fmt.Printf("Mouse is at: %v, %v\n", position["x"], position["y"])
```
```bash
curl 'https://proxy.app.daytona.io/toolbox/{sandboxId}/computeruse/mouse/position'
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md), [Go SDK](https://www.daytona.io/docs/en/go-sdk.md), and [API](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use) references:
> [**get_position (Python SDK)**](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md#mouseget_position)
>
> [**getPosition (TypeScript SDK)**](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md#getposition)
>
> [**GetPosition (Go SDK)**](https://www.daytona.io/docs/en/go-sdk/daytona.md#MouseService.GetPosition)
>
> [**Get mouse position (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use/GET/computeruse/mouse/position)
## Keyboard operations
### Type
Type the specified text.
```python
sandbox.computer_use.keyboard.type("Hello, World!")
# With delay between characters
sandbox.computer_use.keyboard.type("Slow typing", 100)
```
```typescript
await sandbox.computerUse.keyboard.type('Hello, World!');
// With delay between characters
await sandbox.computerUse.keyboard.type('Slow typing', 100);
```
```ruby
sandbox.computer_use.keyboard.type(text: "Hello, World!")
# With delay between characters
sandbox.computer_use.keyboard.type(text: "Slow typing", delay: 100)
```
```go
err := sandbox.ComputerUse.Keyboard().Type(ctx, "Hello, World!", nil)
if err != nil {
log.Fatal(err)
}
// With delay between characters
delay := 100
err = sandbox.ComputerUse.Keyboard().Type(ctx, "Slow typing", &delay)
```
```bash
curl 'https://proxy.app.daytona.io/toolbox/{sandboxId}/computeruse/keyboard/type' \
--request POST \
--header 'Content-Type: application/json' \
--data '{
"delay": 1,
"text": ""
}'
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md), [Go SDK](https://www.daytona.io/docs/en/go-sdk.md), and [API](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use) references:
> [**type (Python SDK)**](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md#keyboardtype)
>
> [**type (TypeScript SDK)**](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md#type)
>
> [**Type (Go SDK)**](https://www.daytona.io/docs/en/go-sdk/daytona.md#KeyboardService.Type)
>
> [**Keyboard type (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use/POST/computeruse/keyboard/type)
### Press
Press a key with optional modifiers.
```python
# Press Enter
sandbox.computer_use.keyboard.press("Return")
# Press Ctrl+C
sandbox.computer_use.keyboard.press("c", ["ctrl"])
# Press Ctrl+Shift+T
sandbox.computer_use.keyboard.press("t", ["ctrl", "shift"])
```
```typescript
// Press Enter
await sandbox.computerUse.keyboard.press('Return');
// Press Ctrl+C
await sandbox.computerUse.keyboard.press('c', ['ctrl']);
// Press Ctrl+Shift+T
await sandbox.computerUse.keyboard.press('t', ['ctrl', 'shift']);
```
```ruby
# Press Enter
sandbox.computer_use.keyboard.press(key: "Return")
# Press Ctrl+C
sandbox.computer_use.keyboard.press(key: "c", modifiers: ["ctrl"])
# Press Ctrl+Shift+T
sandbox.computer_use.keyboard.press(key: "t", modifiers: ["ctrl", "shift"])
```
```go
// Press Enter
err := sandbox.ComputerUse.Keyboard().Press(ctx, "Return", nil)
if err != nil {
log.Fatal(err)
}
// Press Ctrl+C
err = sandbox.ComputerUse.Keyboard().Press(ctx, "c", []string{"ctrl"})
// Press Ctrl+Shift+T
err = sandbox.ComputerUse.Keyboard().Press(ctx, "t", []string{"ctrl", "shift"})
```
```bash
curl 'https://proxy.app.daytona.io/toolbox/{sandboxId}/computeruse/keyboard/key' \
--request POST \
--header 'Content-Type: application/json' \
--data '{
"key": "",
"modifiers": [
""
]
}'
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md), [Go SDK](https://www.daytona.io/docs/en/go-sdk.md), and [API](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use) references:
> [**press (Python SDK)**](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md#keyboardpress)
>
> [**press (TypeScript SDK)**](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md#press)
>
> [**Press (Go SDK)**](https://www.daytona.io/docs/en/go-sdk/daytona.md#KeyboardService.Press)
>
> [**Keyboard press (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use/POST/computeruse/keyboard/press)
### Hotkey
Press a hotkey combination.
```python
# Copy
sandbox.computer_use.keyboard.hotkey("ctrl+c")
# Paste
sandbox.computer_use.keyboard.hotkey("ctrl+v")
# Alt+Tab
sandbox.computer_use.keyboard.hotkey("alt+tab")
```
```typescript
// Copy
await sandbox.computerUse.keyboard.hotkey('ctrl+c');
// Paste
await sandbox.computerUse.keyboard.hotkey('ctrl+v');
// Alt+Tab
await sandbox.computerUse.keyboard.hotkey('alt+tab');
```
```ruby
# Copy
sandbox.computer_use.keyboard.hotkey(keys: "ctrl+c")
# Paste
sandbox.computer_use.keyboard.hotkey(keys: "ctrl+v")
# Alt+Tab
sandbox.computer_use.keyboard.hotkey(keys: "alt+tab")
```
```go
// Copy
err := sandbox.ComputerUse.Keyboard().Hotkey(ctx, "ctrl+c")
if err != nil {
log.Fatal(err)
}
// Paste
err = sandbox.ComputerUse.Keyboard().Hotkey(ctx, "ctrl+v")
// Alt+Tab
err = sandbox.ComputerUse.Keyboard().Hotkey(ctx, "alt+tab")
```
```bash
curl 'https://proxy.app.daytona.io/toolbox/{sandboxId}/computeruse/keyboard/hotkey' \
--request POST \
--header 'Content-Type: application/json' \
--data '{
"keys": ""
}'
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md), [Go SDK](https://www.daytona.io/docs/en/go-sdk.md), and [API](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use) references:
> [**hotkey (Python SDK)**](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md#keyboardhotkey)
>
> [**hotkey (TypeScript SDK)**](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md#hotkey)
>
> [**Hotkey (Go SDK)**](https://www.daytona.io/docs/en/go-sdk/daytona.md#KeyboardService.Hotkey)
>
> [**Keyboard hotkey (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use/POST/computeruse/keyboard/hotkey)
## Screenshot operations
### Take full screen
Take a screenshot of the entire screen.
```python
screenshot = sandbox.computer_use.screenshot.take_full_screen()
print(f"Screenshot size: {screenshot.width}x{screenshot.height}")
# With cursor visible
with_cursor = sandbox.computer_use.screenshot.take_full_screen(True)
```
```typescript
const screenshot = await sandbox.computerUse.screenshot.takeFullScreen();
console.log(`Screenshot size: ${screenshot.width}x${screenshot.height}`);
// With cursor visible
const withCursor = await sandbox.computerUse.screenshot.takeFullScreen(true);
```
```ruby
screenshot = sandbox.computer_use.screenshot.take_full_screen
puts "Screenshot size: #{screenshot.width}x#{screenshot.height}"
# With cursor visible
with_cursor = sandbox.computer_use.screenshot.take_full_screen(show_cursor: true)
```
```go
screenshot, err := sandbox.ComputerUse.Screenshot().TakeFullScreen(ctx, nil)
if err != nil {
log.Fatal(err)
}
fmt.Printf("Screenshot captured, size: %d bytes\n", *screenshot.SizeBytes)
// With cursor visible
showCursor := true
withCursor, err := sandbox.ComputerUse.Screenshot().TakeFullScreen(ctx, &showCursor)
```
```bash
curl 'https://proxy.app.daytona.io/toolbox/{sandboxId}/computeruse/screenshot'
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md), [Go SDK](https://www.daytona.io/docs/en/go-sdk.md), and [API](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use) references:
> [**take_full_screen (Python SDK)**](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md#screenshottake_full_screen)
>
> [**takeFullScreen (TypeScript SDK)**](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md#takefullscreen)
>
> [**TakeFullScreen (Go SDK)**](https://www.daytona.io/docs/en/go-sdk/daytona.md#ScreenshotService.TakeFullScreen)
>
> [**Take screenshot (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use/GET/computeruse/screenshot)
### Take region
Take a screenshot of a specific region.
```python
from daytona import ScreenshotRegion
region = ScreenshotRegion(x=100, y=100, width=300, height=200)
screenshot = sandbox.computer_use.screenshot.take_region(region)
print(f"Captured region: {screenshot.region.width}x{screenshot.region.height}")
```
```typescript
const region = { x: 100, y: 100, width: 300, height: 200 };
const screenshot = await sandbox.computerUse.screenshot.takeRegion(region);
console.log(`Captured region: ${screenshot.region.width}x${screenshot.region.height}`);
```
```ruby
region = Daytona::ComputerUse::ScreenshotRegion.new(x: 100, y: 100, width: 300, height: 200)
screenshot = sandbox.computer_use.screenshot.take_region(region: region)
puts "Captured region: #{screenshot.region.width}x#{screenshot.region.height}"
```
```go
region := types.ScreenshotRegion{X: 100, Y: 100, Width: 300, Height: 200}
screenshot, err := sandbox.ComputerUse.Screenshot().TakeRegion(ctx, region, nil)
if err != nil {
log.Fatal(err)
}
fmt.Printf("Captured region: %dx%d\n", screenshot.Width, screenshot.Height)
```
```bash
curl 'https://proxy.app.daytona.io/toolbox/{sandboxId}/computeruse/screenshot/region?x=100&y=100&width=300&height=200'
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md), [Go SDK](https://www.daytona.io/docs/en/go-sdk.md), and [API](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use) references:
> [**take_region (Python SDK)**](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md#screenshottake_region)
>
> [**takeRegion (TypeScript SDK)**](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md#takeregion)
>
> [**TakeRegion (Go SDK)**](https://www.daytona.io/docs/en/go-sdk/daytona.md#ScreenshotService.TakeRegion)
>
> [**Take Screenshot Region (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use/GET/computeruse/screenshot/region)
### Take compressed
Take a compressed screenshot of the entire screen.
```python
from daytona import ScreenshotOptions
# Default compression
screenshot = sandbox.computer_use.screenshot.take_compressed()
# High quality JPEG
jpeg = sandbox.computer_use.screenshot.take_compressed(
ScreenshotOptions(format="jpeg", quality=95, show_cursor=True)
)
# Scaled down PNG
scaled = sandbox.computer_use.screenshot.take_compressed(
ScreenshotOptions(format="png", scale=0.5)
)
```
```typescript
// Default compression
const screenshot = await sandbox.computerUse.screenshot.takeCompressed();
// High quality JPEG
const jpeg = await sandbox.computerUse.screenshot.takeCompressed({
format: 'jpeg',
quality: 95,
showCursor: true
});
// Scaled down PNG
const scaled = await sandbox.computerUse.screenshot.takeCompressed({
format: 'png',
scale: 0.5
});
```
```ruby
# Default compression
screenshot = sandbox.computer_use.screenshot.take_compressed
# High quality JPEG
jpeg = sandbox.computer_use.screenshot.take_compressed(
options: Daytona::ComputerUse::ScreenshotOptions.new(format: "jpeg", quality: 95, show_cursor: true)
)
# Scaled down PNG
scaled = sandbox.computer_use.screenshot.take_compressed(
options: Daytona::ComputerUse::ScreenshotOptions.new(format: "png", scale: 0.5)
)
```
```bash
curl 'https://proxy.app.daytona.io/toolbox/{sandboxId}/computeruse/screenshot/compressed'
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md), and [API](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use) references:
> [**take_compressed (Python SDK)**](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md#screenshottake_compressed)
>
> [**takeCompressed (TypeScript SDK)**](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md#takecompressed)
>
> [**Take compressed screenshot (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use/GET/computeruse/screenshot/compressed)
### Take compressed region
Take a compressed screenshot of a specific region.
```python
from daytona import ScreenshotRegion, ScreenshotOptions
region = ScreenshotRegion(x=0, y=0, width=800, height=600)
screenshot = sandbox.computer_use.screenshot.take_compressed_region(
region,
ScreenshotOptions(format="webp", quality=80, show_cursor=True)
)
print(f"Compressed size: {screenshot.size_bytes} bytes")
```
```typescript
const region = { x: 0, y: 0, width: 800, height: 600 };
const screenshot = await sandbox.computerUse.screenshot.takeCompressedRegion(region, {
format: 'webp',
quality: 80,
showCursor: true
});
console.log(`Compressed size: ${screenshot.sizeBytes} bytes`);
```
```ruby
region = Daytona::ComputerUse::ScreenshotRegion.new(x: 0, y: 0, width: 800, height: 600)
screenshot = sandbox.computer_use.screenshot.take_compressed_region(
region: region,
options: Daytona::ComputerUse::ScreenshotOptions.new(format: "webp", quality: 80, show_cursor: true)
)
puts "Compressed size: #{screenshot.size_bytes} bytes"
```
```bash
curl 'https://proxy.app.daytona.io/toolbox/{sandboxId}/computeruse/screenshot/region/compressed?x=0&y=0&width=800&height=600'
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md), and [API](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use) references:
> [**take_compressed_region (Python SDK)**](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md#screenshottake_compressed_region)
>
> [**takeCompressedRegion (TypeScript SDK)**](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md#takecompressedregion)
>
> [**Take compressed screenshot region (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use/GET/computeruse/screenshot/region/compressed)
## Screen recording
Computer Use supports screen recording capabilities, allowing you to capture desktop sessions for debugging, documentation, or automation workflows.
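As a rough end-to-end sketch, a typical session brackets automated actions with `start` and `stop`. This combines the recording, mouse, and keyboard operations documented on this page, and assumes `sandbox` is an already-created sandbox with Computer Use running:

```python
# Sketch: record an automated interaction, then list saved recordings.
# Assumes `sandbox` is a running Daytona sandbox with Computer Use started.
recording = sandbox.computer_use.recording.start("demo-session")

# Perform some automated desktop actions while the recording is active.
sandbox.computer_use.mouse.click(100, 200)
sandbox.computer_use.keyboard.type("Hello, World!")

# Stop the recording and inspect the result.
stopped = sandbox.computer_use.recording.stop(recording.id)
print(f"Saved {stopped.duration_seconds}s recording to {stopped.file_path}")

# Recordings persist in the sandbox and can be enumerated later.
for rec in sandbox.computer_use.recording.list().recordings:
    print(rec.name, rec.file_size_bytes)
```

Each call is covered in detail in the subsections below.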
### Configure recording directory
By default, recordings are saved to `~/.daytona/recordings`. You can specify a custom directory by setting the `DAYTONA_RECORDINGS_DIR` environment variable when creating a sandbox:
```python
from daytona import Daytona, CreateSandboxFromSnapshotParams
daytona = Daytona()
sandbox = daytona.create(
CreateSandboxFromSnapshotParams(
snapshot="daytonaio/sandbox:0.6.0",
name="my-sandbox",
env_vars={"DAYTONA_RECORDINGS_DIR": "/home/daytona/my-recordings"}
)
)
```
```typescript
import { Daytona } from '@daytonaio/sdk';
const daytona = new Daytona();
const sandbox = await daytona.create({
snapshot: 'daytonaio/sandbox:0.6.0',
name: 'my-sandbox',
envVars: { DAYTONA_RECORDINGS_DIR: '/home/daytona/my-recordings' }
});
```
```ruby
require 'daytona'
daytona = Daytona::Client.new
sandbox = daytona.create(
snapshot: 'daytonaio/sandbox:0.6.0',
name: 'my-sandbox',
env_vars: { DAYTONA_RECORDINGS_DIR: '/home/daytona/my-recordings' }
)
```
```go
import (
"github.com/daytonaio/daytona/pkg/client"
"github.com/daytonaio/daytona/pkg/types"
)
daytona := client.New()
envVars := map[string]string{
"DAYTONA_RECORDINGS_DIR": "/home/daytona/my-recordings",
}
sandbox, err := daytona.Create(ctx, &types.CreateSandboxParams{
Snapshot: "daytonaio/sandbox:0.6.0",
Name: "my-sandbox",
EnvVars: envVars,
})
if err != nil {
log.Fatal(err)
}
```
### Start recording
Start a new screen recording session with an optional name identifier:
```python
# Start recording with a custom name
recording = sandbox.computer_use.recording.start("test-1")
print(f"Recording started: {recording.id}")
print(f"File path: {recording.file_path}")
```
```typescript
// Start recording with a custom name
const recording = await sandbox.computerUse.recording.start('test-1');
console.log(`Recording started: ${recording.id}`);
console.log(`File path: ${recording.filePath}`);
```
```ruby
# Start recording with a custom label
recording = sandbox.computer_use.recording.start(label: 'test-1')
puts "Recording started: #{recording.id}"
puts "File path: #{recording.file_path}"
```
```go
// Start recording with a custom name
name := "test-1"
recording, err := sandbox.ComputerUse.Recording().Start(ctx, &name)
if err != nil {
log.Fatal(err)
}
fmt.Printf("Recording started: %s\n", *recording.Id)
fmt.Printf("File path: %s\n", *recording.FilePath)
```
```bash
curl 'https://proxy.app.daytona.io/toolbox/{sandboxId}/computeruse/recordings/start' \
--request POST \
--header 'Content-Type: application/json' \
--data '{
"name": "test-1"
}'
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md), [Ruby SDK](https://www.daytona.io/docs/en/ruby-sdk/computer-use.md), [Go SDK](https://www.daytona.io/docs/en/go-sdk.md), and [API](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use) references:
> [**start (Python SDK)**](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md#recordingstart)
>
> [**start (TypeScript SDK)**](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md#start-1)
>
> [**start (Ruby SDK)**](https://www.daytona.io/docs/en/ruby-sdk/computer-use.md)
>
> [**Start (Go SDK)**](https://www.daytona.io/docs/en/go-sdk/daytona.md#RecordingService.Start)
>
> [**Start Recording (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use/POST/computeruse/recordings/start)
### Stop recording
Stop an active recording session by providing the recording ID:
```python
# Stop the recording
stopped_recording = sandbox.computer_use.recording.stop(recording.id)
print(f"Recording stopped: {stopped_recording.duration_seconds} seconds")
print(f"Saved to: {stopped_recording.file_path}")
```
```typescript
// Stop the recording
const stoppedRecording = await sandbox.computerUse.recording.stop(recording.id);
console.log(`Recording stopped: ${stoppedRecording.durationSeconds} seconds`);
console.log(`Saved to: ${stoppedRecording.filePath}`);
```
```ruby
# Stop the recording
stopped_recording = sandbox.computer_use.recording.stop(id: recording.id)
puts "Recording stopped: #{stopped_recording.duration_seconds} seconds"
puts "Saved to: #{stopped_recording.file_path}"
```
```go
// Stop the recording
stoppedRecording, err := sandbox.ComputerUse.Recording().Stop(ctx, *recording.Id)
if err != nil {
log.Fatal(err)
}
fmt.Printf("Recording stopped: %f seconds\n", *stoppedRecording.DurationSeconds)
fmt.Printf("Saved to: %s\n", *stoppedRecording.FilePath)
```
```bash
curl 'https://proxy.app.daytona.io/toolbox/{sandboxId}/computeruse/recordings/stop' \
--request POST \
--header 'Content-Type: application/json' \
--data '{
"id": "recording-id"
}'
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md), [Ruby SDK](https://www.daytona.io/docs/en/ruby-sdk/computer-use.md), [Go SDK](https://www.daytona.io/docs/en/go-sdk.md), and [API](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use) references:
> [**stop (Python SDK)**](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md#recordingstop)
>
> [**stop (TypeScript SDK)**](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md#stop-1)
>
> [**stop (Ruby SDK)**](https://www.daytona.io/docs/en/ruby-sdk/computer-use.md)
>
> [**Stop (Go SDK)**](https://www.daytona.io/docs/en/go-sdk/daytona.md#RecordingService.Stop)
>
> [**Stop Recording (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use/POST/computeruse/recordings/stop)
### List Recordings
Get a list of all recordings in the sandbox:
```python
recordings_list = sandbox.computer_use.recording.list()
print(f"Total recordings: {len(recordings_list.recordings)}")
for rec in recordings_list.recordings:
print(f"- {rec.name}: {rec.duration_seconds}s ({rec.file_size_bytes} bytes)")
```
```typescript
const recordingsList = await sandbox.computerUse.recording.list();
console.log(`Total recordings: ${recordingsList.recordings.length}`);
recordingsList.recordings.forEach(rec => {
console.log(`- ${rec.name}: ${rec.duration_seconds}s (${rec.file_size_bytes} bytes)`);
});
```
```ruby
recordings_list = sandbox.computer_use.recording.list
puts "Total recordings: #{recordings_list.recordings.length}"
recordings_list.recordings.each do |rec|
puts "- #{rec.name}: #{rec.duration_seconds}s (#{rec.file_size_bytes} bytes)"
end
```
```go
recordingsList, err := sandbox.ComputerUse.Recording().List(ctx)
if err != nil {
log.Fatal(err)
}
fmt.Printf("Total recordings: %d\n", len(recordingsList.Recordings))
for _, rec := range recordingsList.Recordings {
fmt.Printf("- %s: %.2fs (%d bytes)\n", *rec.Name, *rec.DurationSeconds, *rec.FileSizeBytes)
}
```
```bash
curl 'https://proxy.app.daytona.io/toolbox/{sandboxId}/computeruse/recordings'
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md), [Ruby SDK](https://www.daytona.io/docs/en/ruby-sdk/computer-use.md), [Go SDK](https://www.daytona.io/docs/en/go-sdk.md), and [API](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use) references:
> [**list (Python SDK)**](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md#recordinglist)
>
> [**list (TypeScript SDK)**](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md#list)
>
> [**list (Ruby SDK)**](https://www.daytona.io/docs/en/ruby-sdk/computer-use.md)
>
> [**List (Go SDK)**](https://www.daytona.io/docs/en/go-sdk/daytona.md#RecordingService.List)
>
> [**List Recordings (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use/GET/computeruse/recordings)
### Get Recording
Get details about a specific recording:
```python
recording_detail = sandbox.computer_use.recording.get("recording-id")
print(f"Recording: {recording_detail.name}")
print(f"Status: {recording_detail.status}")
print(f"Duration: {recording_detail.duration_seconds}s")
```
```typescript
const recordingDetail = await sandbox.computerUse.recording.get('recording-id');
console.log(`Recording: ${recordingDetail.name}`);
console.log(`Status: ${recordingDetail.status}`);
console.log(`Duration: ${recordingDetail.duration_seconds}s`);
```
```ruby
recording_detail = sandbox.computer_use.recording.get(id: 'recording-id')
puts "Recording: #{recording_detail.name}"
puts "Status: #{recording_detail.status}"
puts "Duration: #{recording_detail.duration_seconds}s"
```
```go
recordingDetail, err := sandbox.ComputerUse.Recording().Get(ctx, "recording-id")
if err != nil {
log.Fatal(err)
}
fmt.Printf("Recording: %s\n", *recordingDetail.Name)
fmt.Printf("Status: %s\n", *recordingDetail.Status)
fmt.Printf("Duration: %.2fs\n", *recordingDetail.DurationSeconds)
```
```bash
curl 'https://proxy.app.daytona.io/toolbox/{sandboxId}/computeruse/recordings/{id}'
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md), [Ruby SDK](https://www.daytona.io/docs/en/ruby-sdk/computer-use.md), [Go SDK](https://www.daytona.io/docs/en/go-sdk.md), and [API](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use) references:
> [**get (Python SDK)**](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md#recordingget)
>
> [**get (TypeScript SDK)**](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md#get)
>
> [**get (Ruby SDK)**](https://www.daytona.io/docs/en/ruby-sdk/computer-use.md)
>
> [**Get (Go SDK)**](https://www.daytona.io/docs/en/go-sdk/daytona.md#RecordingService.Get)
>
> [**Get Recording (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use/GET/computeruse/recordings/{id})
### Delete Recording
Delete a recording by ID:
```python
sandbox.computer_use.recording.delete("recording-id")
print("Recording deleted successfully")
```
```typescript
await sandbox.computerUse.recording.delete('recording-id');
console.log('Recording deleted successfully');
```
```ruby
sandbox.computer_use.recording.delete(id: 'recording-id')
puts 'Recording deleted successfully'
```
```go
err := sandbox.ComputerUse.Recording().Delete(ctx, "recording-id")
if err != nil {
log.Fatal(err)
}
fmt.Println("Recording deleted successfully")
```
```bash
curl 'https://proxy.app.daytona.io/toolbox/{sandboxId}/computeruse/recordings/{id}' \
--request DELETE
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md), [Ruby SDK](https://www.daytona.io/docs/en/ruby-sdk/computer-use.md), [Go SDK](https://www.daytona.io/docs/en/go-sdk.md), and [API](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use) references:
> [**delete (Python SDK)**](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md#recordingdelete)
>
> [**delete (TypeScript SDK)**](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md#delete)
>
> [**delete (Ruby SDK)**](https://www.daytona.io/docs/en/ruby-sdk/computer-use.md)
>
> [**Delete (Go SDK)**](https://www.daytona.io/docs/en/go-sdk/daytona.md#RecordingService.Delete)
>
> [**Delete Recording (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use/DELETE/computeruse/recordings/{id})
### Download Recording
Download a recording file from the sandbox to your local machine. The file is streamed efficiently without loading the entire content into memory, making it suitable for large recordings.
```python
# Download recording to local file
sandbox.computer_use.recording.download(recording.id, "local_recording.mp4")
print("Recording downloaded successfully")
# Or with custom path
import os
download_path = os.path.join("recordings", f"recording_{recording.id}.mp4")
sandbox.computer_use.recording.download(recording.id, download_path)
```
```typescript
// Download recording to local file
await sandbox.computerUse.recording.download(recording.id, 'local_recording.mp4');
console.log('Recording downloaded successfully');
// Or with custom path
const downloadPath = `recordings/recording_${recording.id}.mp4`;
await sandbox.computerUse.recording.download(recording.id, downloadPath);
```
```ruby
# Download recording to local file
sandbox.computer_use.recording.download(id: recording.id, local_path: 'local_recording.mp4')
puts 'Recording downloaded successfully'
# Or with custom path
download_path = "recordings/recording_#{recording.id}.mp4"
sandbox.computer_use.recording.download(id: recording.id, local_path: download_path)
```
```go
// Download recording to local file
err := sandbox.ComputerUse.Recording().Download(ctx, recording.GetId(), "local_recording.mp4")
if err != nil {
log.Fatal(err)
}
fmt.Println("Recording downloaded successfully")
// Or with custom path
downloadPath := fmt.Sprintf("recordings/recording_%s.mp4", recording.GetId())
err = sandbox.ComputerUse.Recording().Download(ctx, recording.GetId(), downloadPath)
```
```bash
curl 'https://proxy.app.daytona.io/toolbox/{sandboxId}/computeruse/recordings/{id}/download' \
--output local_recording.mp4
```
:::tip[Streaming Downloads]
All SDK implementations stream the recording file directly to disk without loading the entire content into memory. This allows you to download large recordings (hundreds of MB or even GB) efficiently without running out of memory.
- **Python**: Streams in 64KB chunks using `httpx`
- **TypeScript**: Uses Node.js `pipeline()` with backpressure handling
- **Ruby**: Uses Typhoeus streaming with `on_body` callbacks
- **Go**: Uses `io.Copy()` with 32KB internal buffer
:::
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md), [Ruby SDK](https://www.daytona.io/docs/en/ruby-sdk/computer-use.md), [Go SDK](https://www.daytona.io/docs/en/go-sdk.md), and [API](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use) references:
> [**download (Python SDK)**](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md#recordingdownload)
>
> [**download (TypeScript SDK)**](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md#download)
>
> [**download (Ruby SDK)**](https://www.daytona.io/docs/en/ruby-sdk/computer-use.md)
>
> [**Download (Go SDK)**](https://www.daytona.io/docs/en/go-sdk/daytona.md#RecordingService.Download)
>
> [**Download Recording (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use/GET/computeruse/recordings/{id}/download)
### Recording Dashboard
Every sandbox includes a built-in recording dashboard for managing screen recordings through a web interface. The dashboard allows you to view, download, and delete recordings without writing code.
To access the recording dashboard:
1. Navigate to your sandboxes in the Daytona Dashboard
2. Click the action menu (three dots) for your sandbox
3. Select **Screen Recordings** from the dropdown menu
The recording dashboard provides:
- List of all recordings with metadata (name, duration, file size, creation time)
- Playback controls for reviewing recordings
- Download functionality to save recordings locally
- Delete options for managing storage
:::tip
The recording dashboard runs on a private port and is automatically secured. No additional authentication is required once you access it through the Daytona Dashboard.
:::
## Display operations
### Get info
Get information about the displays.
```python
info = sandbox.computer_use.display.get_info()
print(f"Primary display: {info.primary_display.width}x{info.primary_display.height}")
print(f"Total displays: {info.total_displays}")
for i, display in enumerate(info.displays):
print(f"Display {i}: {display.width}x{display.height} at {display.x},{display.y}")
```
```typescript
const info = await sandbox.computerUse.display.getInfo();
console.log(`Primary display: ${info.primary_display.width}x${info.primary_display.height}`);
console.log(`Total displays: ${info.total_displays}`);
info.displays.forEach((display, index) => {
console.log(`Display ${index}: ${display.width}x${display.height} at ${display.x},${display.y}`);
});
```
```ruby
info = sandbox.computer_use.display.info
puts "Primary display: #{info.primary_display.width}x#{info.primary_display.height}"
puts "Total displays: #{info.total_displays}"
info.displays.each_with_index do |display, i|
puts "Display #{i}: #{display.width}x#{display.height} at #{display.x},#{display.y}"
end
```
```go
info, err := sandbox.ComputerUse.Display().GetInfo(ctx)
if err != nil {
log.Fatal(err)
}
fmt.Printf("Displays: %v\n", info["displays"])
```
```bash
curl 'https://proxy.app.daytona.io/toolbox/{sandboxId}/computeruse/display/info'
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md), [Go SDK](https://www.daytona.io/docs/en/go-sdk.md), and [API](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use) references:
> [**get_info (Python SDK)**](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md#displayget_info)
>
> [**getInfo (TypeScript SDK)**](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md#getinfo)
>
> [**GetInfo (Go SDK)**](https://www.daytona.io/docs/en/go-sdk/daytona.md#DisplayService.GetInfo)
>
> [**Get display info (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use/GET/computeruse/display/info)
### Get windows
Get the list of open windows.
```python
windows = sandbox.computer_use.display.get_windows()
print(f"Found {windows.count} open windows:")
for window in windows.windows:
print(f"- {window.title} (ID: {window.id})")
```
```typescript
const windows = await sandbox.computerUse.display.getWindows();
console.log(`Found ${windows.count} open windows:`);
windows.windows.forEach(window => {
console.log(`- ${window.title} (ID: ${window.id})`);
});
```
```ruby
windows = sandbox.computer_use.display.windows
puts "Found #{windows.count} open windows:"
windows.windows.each do |window|
puts "- #{window.title} (ID: #{window.id})"
end
```
```go
result, err := sandbox.ComputerUse.Display().GetWindows(ctx)
if err != nil {
log.Fatal(err)
}
fmt.Printf("Open windows: %v\n", result["windows"])
```
```bash
curl 'https://proxy.app.daytona.io/toolbox/{sandboxId}/computeruse/display/windows'
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md), [Go SDK](https://www.daytona.io/docs/en/go-sdk.md), and [API](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use) references:
> [**get_windows (Python SDK)**](https://www.daytona.io/docs/en/python-sdk/sync/computer-use.md#displayget_windows)
>
> [**getWindows (TypeScript SDK)**](https://www.daytona.io/docs/en/typescript-sdk/computer-use.md#getwindows)
>
> [**GetWindows (Go SDK)**](https://www.daytona.io/docs/en/go-sdk/daytona.md#DisplayService.GetWindows)
>
> [**Get Windows (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/computer-use/GET/computeruse/display/windows)
Daytona supports multiple methods to configure your environment, in order of precedence:
1. [Configuration in code](#configuration-in-code)
2. [Environment variables](#environment-variables)
3. [.env file](#env-file)
4. [Default values](#default-values)
## Configuration in code
To configure your environment in code, use the `DaytonaConfig` class, which accepts the following parameters:
- `api_key`: Your Daytona [API Key](https://www.daytona.io/docs/api-keys.md)
- `api_url`: URL of your [Daytona API](https://www.daytona.io/docs/en/tools/api.md)
- `target`: Target region where Sandboxes are created (`us` / `eu`)
```python
from daytona import DaytonaConfig
config = DaytonaConfig(
api_key="your-api-key",
api_url="your-api-url",
target="us"
)
```
```typescript
import { DaytonaConfig } from '@daytona/sdk'
const config: DaytonaConfig = {
apiKey: 'your-api-key',
apiUrl: 'your-api-url',
target: 'us',
}
```
```ruby
require 'daytona'
config = Daytona::Config.new(
api_key: 'your-api-key',
api_url: 'your-api-url',
target: 'us'
)
```
```go
package main
import (
"github.com/daytonaio/daytona/libs/sdk-go/pkg/daytona"
)
func main() {
config := daytona.Config{
APIKey: "your-api-key",
APIURL: "your-api-url",
Target: "us",
}
client := daytona.NewClient(&config)
_ = client
}
```
```bash
curl 'https://app.daytona.io/api/api-keys' \
--request POST \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer YOUR_API_KEY' \
--data '{
"name": "My API Key",
"permissions": ["write:registries"],
"expiresAt": "2030-01-01T00:00:00.000Z"
}'
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk/daytona.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk/daytona.md), [Ruby SDK](https://www.daytona.io/docs/en/ruby-sdk/daytona.md), [Go SDK](https://www.daytona.io/docs/en/go-sdk/daytona.md), and [API](https://www.daytona.io/docs/en/tools/api.md#daytona/tag/api-keys) references:
> [**DaytonaConfig (Python SDK)**](https://www.daytona.io/docs/en/python-sdk/sync/daytona.md#daytonaconfig)
>
> [**DaytonaConfig (TypeScript SDK)**](https://www.daytona.io/docs/en/typescript-sdk/daytona.md#daytonaconfig)
>
> [**Config (Ruby SDK)**](https://www.daytona.io/docs/en/ruby-sdk/config.md#config)
>
> [**DaytonaConfig (Go SDK)**](https://www.daytona.io/docs/en/go-sdk/types.md#DaytonaConfig)
>
> [**Create API key (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona/tag/api-keys/POST/api-keys)
## Environment variables
Daytona supports configuration through environment variables. The SDK automatically reads the following variables:
| Variable | Description | Required |
| --------------------- | ------------------------------------------ | -------- |
| **`DAYTONA_API_KEY`** | Your Daytona API key. | Yes |
| **`DAYTONA_API_URL`** | URL of your Daytona API. | No |
| **`DAYTONA_TARGET`** | Daytona Target to create the sandboxes on. | No |
### Shell
Set environment variables in your shell using the following methods:
```bash
export DAYTONA_API_KEY=your-api-key
export DAYTONA_API_URL=https://your-api-url
export DAYTONA_TARGET=us
```
```powershell
$env:DAYTONA_API_KEY="your-api-key"
$env:DAYTONA_API_URL="https://your-api-url"
$env:DAYTONA_TARGET="us"
```
### .env file
Set the environment variables in a `.env` file using the following format:
```bash
DAYTONA_API_KEY=your-api-key
DAYTONA_API_URL=https://your-api-url
DAYTONA_TARGET=us
```
## Default values
If no configuration is provided, Daytona will use its built-in default values:
| **Option** | **Value** |
| ---------- | ----------------------------------- |
| API URL | https://app.daytona.io/api |
| Target | Default region for the organization |
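The precedence order above can be sketched as a simple resolver. This is an illustrative model of the documented behavior (code configuration wins, then environment variables, then a `.env` file, then the built-in default), not the SDK's actual implementation:

```python
import os

DEFAULT_API_URL = "https://app.daytona.io/api"

def resolve_api_url(code_value=None, dotenv_values=None):
    """Resolve the API URL following the documented precedence order."""
    if code_value:                             # 1. configuration in code
        return code_value
    env_value = os.environ.get("DAYTONA_API_URL")
    if env_value:                              # 2. environment variable
        return env_value
    if dotenv_values and dotenv_values.get("DAYTONA_API_URL"):
        return dotenv_values["DAYTONA_API_URL"]  # 3. .env file
    return DEFAULT_API_URL                     # 4. built-in default
```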
Daytona lets you run a custom preview proxy service to handle [preview URLs](https://www.daytona.io/docs/en/preview.md) for sandboxes. This gives you full control over the preview experience, including custom domains, authentication, error handling, and styling:
- **Custom domain**: host your proxy under your own domain (e.g., `preview.yourcompany.com`)
- **User authentication**: implement custom authentication logic for private previews
- **Sandbox management**: automatically start stopped sandboxes before forwarding users
- **Custom error pages**: style error pages to match your brand
- **Preview warning control**: disable Daytona's preview warning
- **CORS management**: override Daytona's default CORS settings
## How it works
When a user visits a preview URL, your custom proxy receives the request and can:
1. Authenticate the user using custom logic
2. Check sandbox status and start it if needed
3. Forward the request to the actual sandbox
4. Handle responses with custom styling and error pages
5. Send custom headers to control Daytona's behavior
Your proxy should forward the `X-Forwarded-Host` header with the original request host when proxying requests to Daytona.
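The steps above can be sketched as a request handler. Everything here is a placeholder for your own logic (the function names and auth/start callbacks are illustrative, not Daytona SDK calls); only the `X-Forwarded-Host` header is part of the documented contract:

```python
def build_forward_headers(incoming_headers, original_host):
    """Copy incoming headers and attach the X-Forwarded-Host header
    that Daytona expects from a custom proxy."""
    headers = dict(incoming_headers)
    headers["X-Forwarded-Host"] = original_host
    return headers

def handle_preview_request(original_host, incoming_headers,
                           is_authenticated, sandbox_is_running,
                           start_sandbox, forward_to_sandbox):
    # 1. Authenticate the user using custom logic
    if not is_authenticated():
        return {"status": 401, "body": "Unauthorized"}
    # 2. Check sandbox status and start it if needed
    if not sandbox_is_running():
        start_sandbox()
    # 3. Forward the request with the original host preserved
    return forward_to_sandbox(
        build_forward_headers(incoming_headers, original_host)
    )
```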
## WebSocket support
The preview proxy fully supports WebSocket connections. WebSocket upgrade requests (`Upgrade: websocket`) are automatically detected and proxied. WebSocket connections skip the preview warning page.
## Reserved ports
The following ports are reserved for internal services and always require authentication, even on public sandboxes:
| Port | Service |
| ----------- | ----------------------------------------- |
| **`22222`** | [Web terminal](https://www.daytona.io/docs/en/web-terminal.md) |
| **`2280`** | Toolbox (IDE/development interface) |
| **`33333`** | Recording dashboard |
Your custom proxy should avoid exposing these ports unless you explicitly need access to these services.
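A custom proxy might guard against exposing these ports with a simple check. The port numbers come from the table above; the helper name is illustrative:

```python
# Ports reserved for Daytona's internal services.
RESERVED_PORTS = {
    22222,  # web terminal
    2280,   # toolbox (IDE/development interface)
    33333,  # recording dashboard
}

def is_forwardable(port):
    """Return True if a custom proxy may safely expose this port."""
    return port not in RESERVED_PORTS
```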
## Proxy headers
Your proxy can send special headers to control Daytona's behavior.
### Disable preview warning
To disable Daytona's preview warning page, send:
```
X-Daytona-Skip-Preview-Warning: true
```
The warning page is shown only for browser requests. Once accepted, it sets a `daytona-preview-page-accepted` cookie that persists for 24 hours.
### Disable CORS
Daytona's default CORS policy allows all origins with credentials. To override this and use your own CORS settings, send:
```
X-Daytona-Disable-CORS: true
```
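As a sketch, these control headers can be attached to a forwarded request with Python's standard library; the preview URL below is a placeholder for your sandbox's preview link:

```python
import urllib.request

# Build a request carrying Daytona's proxy control headers.
req = urllib.request.Request(
    "https://3000-sandbox-123456.proxy.daytona.work",
    headers={
        "X-Daytona-Skip-Preview-Warning": "true",  # suppress the warning page
        "X-Daytona-Disable-CORS": "true",          # apply your own CORS policy
    },
)
```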
### Disable last activity update
To prevent sandbox last activity updates when previewing, set the `X-Daytona-Skip-Last-Activity-Update` header to `true`. This prevents Daytona from keeping sandboxes that have [auto-stop enabled](https://www.daytona.io/docs/en/sandboxes.md#auto-stop-interval) in a started state:
```bash
curl -H "X-Daytona-Skip-Last-Activity-Update: true" \
https://3000-sandbox-123456.proxy.daytona.work
```
### Authentication
For private preview links, send:
```
X-Daytona-Preview-Token: {sandboxToken}
```
The `sandboxToken` can be fetched through the Daytona SDK or API using the [standard preview URL](https://www.daytona.io/docs/en/preview.md#standard-preview-url) methods.
## Examples
Examples of custom preview proxies are available on [GitHub](https://github.com/daytonaio/daytona-proxy-samples).
The declarative builder provides a powerful, code-first approach to defining dependencies for Daytona Sandboxes. Instead of importing images from a container registry, you can define them programmatically using the Daytona SDK.
The declarative builder system supports two primary workflows:
1. [**Declarative images**](#build-declarative-images): build images with varying dependencies _on demand_ when creating sandboxes
2. [**Pre-built Snapshots**](#create-pre-built-snapshots): create and register _ready-to-use_ [Snapshots](https://www.daytona.io/docs/snapshots.md) that can be shared across multiple sandboxes
## Build declarative images
Daytona provides an option to create declarative images on-the-fly when creating sandboxes. This is ideal for iterating quickly without creating separate snapshots.
Declarative images are cached for 24 hours and automatically reused when the same script runs again, so subsequent runs on the same runner are almost instantaneous.
```python
# Define a declarative image with python packages
declarative_image = (
Image.debian_slim("3.12")
.pip_install(["requests", "pytest"])
.workdir("/home/daytona")
)
# Create a new sandbox with the declarative image and stream the build logs
sandbox = daytona.create(
CreateSandboxFromImageParams(image=declarative_image),
timeout=0,
on_snapshot_create_logs=print,
)
```
```typescript
// Define a declarative image with python packages
const declarativeImage = Image.debianSlim('3.12')
.pipInstall(['requests', 'pytest'])
.workdir('/home/daytona')
// Create a new sandbox with the declarative image and stream the build logs
const sandbox = await daytona.create(
{
image: declarativeImage,
},
{
timeout: 0,
onSnapshotCreateLogs: console.log,
}
)
```
```ruby
# Define a simple declarative image with Python packages
declarative_image = Daytona::Image
.debian_slim('3.12')
.pip_install(['requests', 'pytest'])
.workdir('/home/daytona')
# Create a new Sandbox with the declarative image and stream the build logs
sandbox = daytona.create(
Daytona::CreateSandboxFromImageParams.new(image: declarative_image),
on_snapshot_create_logs: proc { |chunk| puts chunk }
)
```
```go
// Define a declarative image with python packages
version := "3.12"
declarativeImage := daytona.DebianSlim(&version).
PipInstall([]string{"requests", "pytest"}).
Workdir("/home/daytona")
// Create a new sandbox with the declarative image and stream the build logs
logChan := make(chan string)
go func() {
for log := range logChan {
fmt.Print(log)
}
}()
sandbox, err := client.Create(ctx, types.ImageParams{
Image: declarativeImage,
}, options.WithTimeout(0), options.WithLogChannel(logChan))
if err != nil {
// handle error
}
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk.md), [Ruby SDK](https://www.daytona.io/docs/en/ruby-sdk.md), and [Go SDK](https://www.daytona.io/docs/en/go-sdk.md) references:
> [**CreateSandboxFromImageParams (Python SDK)**](https://www.daytona.io/docs/python-sdk/sync/daytona.md#createsandboxfromimageparams)
>
> [**CreateSandboxFromImageParams (TypeScript SDK)**](https://www.daytona.io/docs/typescript-sdk/daytona.md#createsandboxfromimageparams)
>
> [**CreateSandboxFromImageParams (Ruby SDK)**](https://www.daytona.io/docs/ruby-sdk/daytona.md#createsandboxfromimageparams)
>
> [**Create (Go SDK)**](https://www.daytona.io/docs/go-sdk/daytona.md#Client.Create)
:::note
Use the following best practices when working with the declarative builder:
- **Layer Optimization**: Group related operations to minimize Docker layers
- **Cache Utilization**: Identical build commands and context will be cached and subsequent builds will be almost instant
- **Security**: Create non-root users for application workloads
- **Resource Efficiency**: Use slim base images when appropriate
- **Context Minimization**: Only include necessary files in the build context
:::
## Create pre-built Snapshots
Daytona provides an option to [create pre-built snapshots](https://www.daytona.io/docs/snapshots.md#create-snapshots) that can be reused across multiple sandboxes.
The snapshot remains visible in the [Daytona Dashboard ↗](https://app.daytona.io/dashboard/snapshots) and is permanently cached, ensuring instant availability without rebuilding.
```python
# Create a python data science image
snapshot_name = "data-science-snapshot"
image = (
Image.debian_slim("3.12")
.pip_install(["pandas", "numpy"])
.workdir("/home/daytona")
)
# Create the snapshot and stream the build logs
daytona.snapshot.create(
CreateSnapshotParams(
name=snapshot_name,
image=image,
),
on_logs=print,
)
# Create a new sandbox using the pre-built snapshot
sandbox = daytona.create(
CreateSandboxFromSnapshotParams(snapshot=snapshot_name)
)
```
```typescript
// Create a python data science image
const snapshotName = 'data-science-snapshot'
const image = Image.debianSlim('3.12')
.pipInstall(['pandas', 'numpy'])
.workdir('/home/daytona')
// Create the snapshot and stream the build logs
await daytona.snapshot.create(
{
name: snapshotName,
image,
},
{
onLogs: console.log,
}
)
// Create a new sandbox using the pre-built snapshot
const sandbox = await daytona.create({
snapshot: snapshotName,
})
```
```ruby
# Create a simple Python data science image
snapshot_name = 'data-science-snapshot'
image = Daytona::Image
.debian_slim('3.12')
.pip_install(['pandas', 'numpy'])
.workdir('/home/daytona')
# Create the Snapshot and stream the build logs
daytona.snapshot.create(
Daytona::CreateSnapshotParams.new(
name: snapshot_name,
image: image
),
on_logs: proc { |chunk| puts chunk }
)
# Create a new Sandbox using the pre-built Snapshot
sandbox = daytona.create(
Daytona::CreateSandboxFromSnapshotParams.new(snapshot: snapshot_name)
)
```
```go
// Create a python data science image
snapshotName := "data-science-snapshot"
version := "3.12"
image := daytona.DebianSlim(&version).
PipInstall([]string{"pandas", "numpy"}).
Workdir("/home/daytona")
// Create the snapshot and stream the build logs
_, logChan, err := client.Snapshot.Create(ctx, &types.CreateSnapshotParams{
Name: snapshotName,
Image: image,
})
if err != nil {
// handle error
}
for log := range logChan {
fmt.Print(log)
}
// Create a new sandbox using the pre-built snapshot
sandbox, err := client.Create(ctx, types.SnapshotParams{
Snapshot: snapshotName,
})
if err != nil {
// handle error
}
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk.md), [Ruby SDK](https://www.daytona.io/docs/en/ruby-sdk.md), and [Go SDK](https://www.daytona.io/docs/en/go-sdk.md) references:
> [**CreateSnapshotParams (Python SDK)**](https://www.daytona.io/docs/python-sdk/sync/snapshot.md#createsnapshotparams)
>
> [**CreateSnapshotParams (TypeScript SDK)**](https://www.daytona.io/docs/typescript-sdk/snapshot.md#createsnapshotparams)
>
> [**CreateSnapshotParams (Ruby SDK)**](https://www.daytona.io/docs/ruby-sdk/snapshot.md#createsnapshotparams)
>
> [**Create (Go SDK)**](https://www.daytona.io/docs/go-sdk/daytona.md#SnapshotService.Create)
## Image configuration
Daytona provides an option to define images programmatically using the Daytona SDK. You can specify base images, install packages, add files, set environment variables, and more.
For a complete API reference and method signatures, see the [Python](https://www.daytona.io/docs/python-sdk/common/image.md), [TypeScript](https://www.daytona.io/docs/typescript-sdk/image.md), [Ruby](https://www.daytona.io/docs/ruby-sdk/image.md), and [Go](https://www.daytona.io/docs/go-sdk/daytona.md#type-DockerImage) SDK references.
### Base image selection
Daytona provides an option to select base images. The following snippets demonstrate how to select and configure base images:
```python
# Create an image from a base
image = Image.base("python:3.12-slim-bookworm")
# Use a Debian slim image with Python 3.12
image = Image.debian_slim("3.12")
```
```typescript
// Create an image from a base
const image = Image.base('python:3.12-slim-bookworm')
// Use a Debian slim image with Python 3.12
const image = Image.debianSlim('3.12')
```
```ruby
# Create an image from a base
image = Daytona::Image.base('python:3.12-slim-bookworm')
# Use a Debian slim image with Python 3.12
image = Daytona::Image.debian_slim('3.12')
```
```go
// Create an image from a base
image := daytona.Base("python:3.12-slim-bookworm")
// Use a Debian slim image with Python 3.12
version := "3.12"
image := daytona.DebianSlim(&version)
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk.md), [Ruby SDK](https://www.daytona.io/docs/en/ruby-sdk.md), and [Go SDK](https://www.daytona.io/docs/en/go-sdk.md) references:
> [**base (Python SDK)**](https://www.daytona.io/docs/python-sdk/common/image.md#imagebase)
>
> [**base (TypeScript SDK)**](https://www.daytona.io/docs/typescript-sdk/image.md#base)
>
> [**Base (Go SDK)**](https://www.daytona.io/docs/go-sdk/daytona.md#Base)
>
> [**debian_slim (Python SDK)**](https://www.daytona.io/docs/python-sdk/common/image.md#imagedebian_slim)
>
> [**debianSlim (TypeScript SDK)**](https://www.daytona.io/docs/typescript-sdk/image.md#debianslim)
>
> [**DebianSlim (Go SDK)**](https://www.daytona.io/docs/go-sdk/daytona.md#DebianSlim)
### Package management
Daytona provides an option to install packages and dependencies into your image. The following snippets demonstrate the available methods:
```python
# Add pip packages
image = Image.debian_slim("3.12").pip_install(["requests", "pandas"])
# Install from requirements.txt
image = Image.debian_slim("3.12").pip_install_from_requirements("requirements.txt")
# Install from pyproject.toml (with optional dependencies)
image = Image.debian_slim("3.12").pip_install_from_pyproject("pyproject.toml", optional_dependencies=["dev"])
```
```typescript
// Add pip packages
const image = Image.debianSlim('3.12').pipInstall(['requests', 'pandas'])
// Install from requirements.txt
const image = Image.debianSlim('3.12').pipInstallFromRequirements('requirements.txt')
// Install from pyproject.toml (with optional dependencies)
const image = Image.debianSlim('3.12').pipInstallFromPyproject('pyproject.toml', {
optionalDependencies: ['dev']
})
```
```ruby
# Add pip packages
image = Daytona::Image.debian_slim('3.12').pip_install(['requests', 'pandas'])
# Install from requirements.txt
image = Daytona::Image.debian_slim('3.12').pip_install_from_requirements('requirements.txt')
# Install from pyproject.toml (with optional dependencies)
image = Daytona::Image.debian_slim('3.12').pip_install_from_pyproject('pyproject.toml',
  optional_dependencies: ['dev']
)
```
```go
// Add pip packages
version := "3.12"
image := daytona.DebianSlim(&version).PipInstall([]string{"requests", "pandas"})
// Install from requirements.txt
image = daytona.DebianSlim(&version).
	AddLocalFile("requirements.txt", "/tmp/requirements.txt").
	Run("pip install -r /tmp/requirements.txt")
// Install from pyproject.toml (with optional dependencies)
image = daytona.DebianSlim(&version).
	AddLocalFile("pyproject.toml", "/tmp/pyproject.toml").
	Run("pip install /tmp[dev]")
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk.md), [Ruby SDK](https://www.daytona.io/docs/en/ruby-sdk.md), and [Go SDK](https://www.daytona.io/docs/en/go-sdk.md) references:
> [**pip_install (Python SDK)**](https://www.daytona.io/docs/python-sdk/common/image.md#imagepip_install)
>
> [**pipInstall (TypeScript SDK)**](https://www.daytona.io/docs/typescript-sdk/image.md#pipinstall)
>
> [**pip_install (Ruby SDK)**](https://www.daytona.io/docs/ruby-sdk/image.md#pip_install)
>
> [**PipInstall (Go SDK)**](https://www.daytona.io/docs/go-sdk/daytona.md#DockerImage.PipInstall)
>
> [**pip_install_from_requirements (Python SDK)**](https://www.daytona.io/docs/python-sdk/common/image.md#imagepip_install_from_requirements)
>
> [**pipInstallFromRequirements (TypeScript SDK)**](https://www.daytona.io/docs/typescript-sdk/image.md#pipinstallfromrequirements)
>
> [**pip_install_from_requirements (Ruby SDK)**](https://www.daytona.io/docs/ruby-sdk/image.md#pip_install_from_requirements)
>
> [**pip_install_from_pyproject (Python SDK)**](https://www.daytona.io/docs/python-sdk/common/image.md#imagepip_install_from_pyproject)
>
> [**pipInstallFromPyproject (TypeScript SDK)**](https://www.daytona.io/docs/typescript-sdk/image.md#pipinstallfrompyproject)
>
> [**pip_install_from_pyproject (Ruby SDK)**](https://www.daytona.io/docs/ruby-sdk/image.md#pip_install_from_pyproject)
### File system operations
Daytona provides an option to add files and directories to your image, as the following snippets demonstrate:
```python
# Add a local file
image = Image.debian_slim("3.12").add_local_file("package.json", "/home/daytona/package.json")
# Add a local directory
image = Image.debian_slim("3.12").add_local_dir("src", "/home/daytona/src")
```
```typescript
// Add a local file
const image = Image.debianSlim('3.12').addLocalFile('package.json', '/home/daytona/package.json')
// Add a local directory
const image = Image.debianSlim('3.12').addLocalDir('src', '/home/daytona/src')
```
```ruby
# Add a local file
image = Daytona::Image.debian_slim('3.12').add_local_file('package.json', '/home/daytona/package.json')
# Add a local directory
image = Daytona::Image.debian_slim('3.12').add_local_dir('src', '/home/daytona/src')
```
```go
// Add a local file
version := "3.12"
image := daytona.DebianSlim(&version).AddLocalFile("package.json", "/home/daytona/package.json")
// Add a local directory
image = daytona.DebianSlim(&version).AddLocalDir("src", "/home/daytona/src")
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk.md), [Ruby SDK](https://www.daytona.io/docs/en/ruby-sdk.md), and [Go SDK](https://www.daytona.io/docs/en/go-sdk.md) references:
> [**add_local_file (Python SDK)**](https://www.daytona.io/docs/python-sdk/common/image.md#imageadd_local_file)
>
> [**add_local_dir (Python SDK)**](https://www.daytona.io/docs/python-sdk/common/image.md#imageadd_local_dir)
>
> [**addLocalFile (TypeScript SDK)**](https://www.daytona.io/docs/typescript-sdk/image.md#addlocalfile)
>
> [**addLocalDir (TypeScript SDK)**](https://www.daytona.io/docs/typescript-sdk/image.md#addlocaldir)
>
> [**add_local_file (Ruby SDK)**](https://www.daytona.io/docs/ruby-sdk/image.md#add_local_file)
>
> [**add_local_dir (Ruby SDK)**](https://www.daytona.io/docs/ruby-sdk/image.md#add_local_dir)
>
> [**AddLocalFile (Go SDK)**](https://www.daytona.io/docs/go-sdk/daytona.md#DockerImage.AddLocalFile)
>
> [**AddLocalDir (Go SDK)**](https://www.daytona.io/docs/go-sdk/daytona.md#DockerImage.AddLocalDir)
### Environment configuration
Daytona provides an option to configure environment variables and working directories, as the following snippets demonstrate:
```python
# Set environment variables
image = Image.debian_slim("3.12").env({"PROJECT_ROOT": "/home/daytona"})
# Set working directory
image = Image.debian_slim("3.12").workdir("/home/daytona")
```
```typescript
// Set environment variables
const image = Image.debianSlim('3.12').env({ PROJECT_ROOT: '/home/daytona' })
// Set working directory
const image = Image.debianSlim('3.12').workdir('/home/daytona')
```
```ruby
# Set environment variables
image = Daytona::Image.debian_slim('3.12').env({ 'PROJECT_ROOT' => '/home/daytona' })
# Set working directory
image = Daytona::Image.debian_slim('3.12').workdir('/home/daytona')
```
```go
// Set environment variables
version := "3.12"
image := daytona.DebianSlim(&version).Env("PROJECT_ROOT", "/home/daytona")
// Set working directory
image = daytona.DebianSlim(&version).Workdir("/home/daytona")
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk.md), [Ruby SDK](https://www.daytona.io/docs/en/ruby-sdk.md), and [Go SDK](https://www.daytona.io/docs/en/go-sdk.md) references:
> [**env (Python SDK)**](https://www.daytona.io/docs/python-sdk/common/image.md#imageenv)
>
> [**workdir (Python SDK)**](https://www.daytona.io/docs/python-sdk/common/image.md#imageworkdir)
>
> [**env (TypeScript SDK)**](https://www.daytona.io/docs/typescript-sdk/image.md#env)
>
> [**workdir (TypeScript SDK)**](https://www.daytona.io/docs/typescript-sdk/image.md#workdir)
>
> [**env (Ruby SDK)**](https://www.daytona.io/docs/ruby-sdk/image.md#env)
>
> [**workdir (Ruby SDK)**](https://www.daytona.io/docs/ruby-sdk/image.md#workdir)
>
> [**Env (Go SDK)**](https://www.daytona.io/docs/go-sdk/daytona.md#DockerImage.Env)
>
> [**Workdir (Go SDK)**](https://www.daytona.io/docs/go-sdk/daytona.md#DockerImage.Workdir)
### Commands and entrypoints
Daytona provides an option to execute commands during build and configure container startup behavior, as the following snippets demonstrate:
```python
# Run shell commands during build
image = Image.debian_slim("3.12").run_commands(
    'apt-get update && apt-get install -y git',
    'groupadd -r daytona && useradd -r -g daytona -m daytona',
    'mkdir -p /home/daytona/workspace'
)
# Set entrypoint
image = Image.debian_slim("3.12").entrypoint(["/bin/bash"])
# Set default command
image = Image.debian_slim("3.12").cmd(["/bin/bash"])
```
```typescript
// Run shell commands during build
const image = Image.debianSlim('3.12').runCommands(
  'apt-get update && apt-get install -y git',
  'groupadd -r daytona && useradd -r -g daytona -m daytona',
  'mkdir -p /home/daytona/workspace'
)
// Set entrypoint
const image = Image.debianSlim('3.12').entrypoint(['/bin/bash'])
// Set default command
const image = Image.debianSlim('3.12').cmd(['/bin/bash'])
```
```ruby
# Run shell commands during build
image = Daytona::Image.debian_slim('3.12').run_commands(
  'apt-get update && apt-get install -y git',
  'groupadd -r daytona && useradd -r -g daytona -m daytona',
  'mkdir -p /home/daytona/workspace'
)
# Set entrypoint
image = Daytona::Image.debian_slim('3.12').entrypoint(['/bin/bash'])
# Set default command
image = Daytona::Image.debian_slim('3.12').cmd(['/bin/bash'])
```
```go
// Run shell commands during build
version := "3.12"
image := daytona.DebianSlim(&version).
	Run("apt-get update && apt-get install -y git").
	Run("groupadd -r daytona && useradd -r -g daytona -m daytona").
	Run("mkdir -p /home/daytona/workspace")
// Set entrypoint
image = daytona.DebianSlim(&version).Entrypoint([]string{"/bin/bash"})
// Set default command
image = daytona.DebianSlim(&version).Cmd([]string{"/bin/bash"})
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk.md), [Ruby SDK](https://www.daytona.io/docs/en/ruby-sdk.md), and [Go SDK](https://www.daytona.io/docs/en/go-sdk.md) references:
> [**run_commands (Python SDK)**](https://www.daytona.io/docs/python-sdk/common/image.md#imagerun_commands)
>
> [**entrypoint (Python SDK)**](https://www.daytona.io/docs/python-sdk/common/image.md#imageentrypoint)
>
> [**cmd (Python SDK)**](https://www.daytona.io/docs/python-sdk/common/image.md#imagecmd)
>
> [**runCommands (TypeScript SDK)**](https://www.daytona.io/docs/typescript-sdk/image.md#runcommands)
>
> [**entrypoint (TypeScript SDK)**](https://www.daytona.io/docs/typescript-sdk/image.md#entrypoint)
>
> [**cmd (TypeScript SDK)**](https://www.daytona.io/docs/typescript-sdk/image.md#cmd)
>
> [**run_commands (Ruby SDK)**](https://www.daytona.io/docs/ruby-sdk/image.md#run_commands)
>
> [**entrypoint (Ruby SDK)**](https://www.daytona.io/docs/ruby-sdk/image.md#entrypoint)
>
> [**cmd (Ruby SDK)**](https://www.daytona.io/docs/ruby-sdk/image.md#cmd)
>
> [**Run (Go SDK)**](https://www.daytona.io/docs/go-sdk/daytona.md#DockerImage.Run)
>
> [**Entrypoint (Go SDK)**](https://www.daytona.io/docs/go-sdk/daytona.md#DockerImage.Entrypoint)
>
> [**Cmd (Go SDK)**](https://www.daytona.io/docs/go-sdk/daytona.md#DockerImage.Cmd)
### Dockerfile integration
Daytona provides an option to integrate existing Dockerfiles or add custom Dockerfile commands, as the following snippets demonstrate:
```python
# Add custom Dockerfile commands
image = Image.debian_slim("3.12").dockerfile_commands(["RUN echo 'Hello, world!'"])
# Use an existing Dockerfile
image = Image.from_dockerfile("Dockerfile")
# Extend an existing Dockerfile
image = Image.from_dockerfile("app/Dockerfile").pip_install(["numpy"])
```
```typescript
// Add custom Dockerfile commands
const image = Image.debianSlim('3.12').dockerfileCommands(['RUN echo "Hello, world!"'])
// Use an existing Dockerfile
const image = Image.fromDockerfile('Dockerfile')
// Extend an existing Dockerfile
const image = Image.fromDockerfile('app/Dockerfile').pipInstall(['numpy'])
```
```ruby
# Add custom Dockerfile commands
image = Daytona::Image.debian_slim('3.12').dockerfile_commands(['RUN echo "Hello, world!"'])
# Use an existing Dockerfile
image = Daytona::Image.from_dockerfile('Dockerfile')
# Extend an existing Dockerfile
image = Daytona::Image.from_dockerfile('app/Dockerfile').pip_install(['numpy'])
```
```go
// Note: In Go, FromDockerfile takes the Dockerfile content as a string
content, err := os.ReadFile("Dockerfile")
if err != nil {
	// handle error
}
image := daytona.FromDockerfile(string(content))
// Extend an existing Dockerfile with additional commands
content, err = os.ReadFile("app/Dockerfile")
if err != nil {
	// handle error
}
image = daytona.FromDockerfile(string(content)).
	PipInstall([]string{"numpy"})
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk.md), [Ruby SDK](https://www.daytona.io/docs/en/ruby-sdk.md), and [Go SDK](https://www.daytona.io/docs/en/go-sdk.md) references:
> [**dockerfile_commands (Python SDK)**](https://www.daytona.io/docs/python-sdk/common/image.md#imagedockerfile_commands)
>
> [**from_dockerfile (Python SDK)**](https://www.daytona.io/docs/python-sdk/common/image.md#imagefrom_dockerfile)
>
> [**dockerfileCommands (TypeScript SDK)**](https://www.daytona.io/docs/typescript-sdk/image.md#dockerfilecommands)
>
> [**fromDockerfile (TypeScript SDK)**](https://www.daytona.io/docs/typescript-sdk/image.md#fromdockerfile)
>
> [**dockerfile_commands (Ruby SDK)**](https://www.daytona.io/docs/ruby-sdk/image.md#dockerfile_commands)
>
> [**FromDockerfile (Go SDK)**](https://www.daytona.io/docs/go-sdk/daytona.md#FromDockerfile)
OpenTelemetry (OTEL) tracing allows you to monitor and debug your Daytona SDK operations by collecting distributed traces. This is particularly useful for understanding performance bottlenecks, debugging issues, and gaining visibility into your sandbox operations.
:::caution
OpenTelemetry collection is currently an experimental feature and may change in future releases. To request access to this feature, please contact [support@daytona.io](mailto:support@daytona.io).
:::
---
## Sandbox Telemetry Collection
Daytona can collect traces, logs, and metrics directly from your sandboxes. This provides complete observability across your entire Daytona environment, from [SDK calls](#sdk-tracing-configuration) to sandbox runtime behavior.
### What Gets Collected from Sandboxes
When sandbox telemetry is enabled, the following data is collected:
**Metrics:**
- `daytona.sandbox.cpu.utilization` - CPU usage percentage (0-100%)
- `daytona.sandbox.cpu.limit` - CPU cores limit
- `daytona.sandbox.memory.utilization` - Memory usage percentage (0-100%)
- `daytona.sandbox.memory.usage` - Memory used in bytes
- `daytona.sandbox.memory.limit` - Memory limit in bytes
- `daytona.sandbox.filesystem.utilization` - Disk usage percentage (0-100%)
- `daytona.sandbox.filesystem.usage` - Disk space used in bytes
- `daytona.sandbox.filesystem.available` - Disk space available in bytes
- `daytona.sandbox.filesystem.total` - Total disk space in bytes
**Traces:**
- HTTP requests and responses
- Custom spans from your application code
**Logs:**
- Application logs (stdout/stderr)
- System logs
- Runtime errors and warnings
### Viewing Telemetry in the Dashboard
Logs, traces, and metrics collected from sandboxes can be viewed directly in the Daytona Dashboard. Open the **Sandbox Details** sheet for any sandbox and use the **Logs**, **Traces**, and **Metrics** tabs to inspect the collected telemetry data.
:::note
Daytona retains sandbox telemetry data for **3 days**. If you need to keep the data for longer, it is recommended that you connect your own OTLP-compatible collector using the [sandbox collection configuration](#configure-sandbox-collection).
:::
:::tip
Sandbox telemetry collection works independently from SDK tracing. You can enable one or both depending on your observability needs:
- **SDK tracing only**: Monitor Daytona API operations and SDK calls
- **Sandbox telemetry only**: Monitor application behavior inside sandboxes
- **Both**: Get complete end-to-end observability across your entire stack
:::
### Configure Sandbox Collection
To enable telemetry collection from sandboxes:
1. Navigate to the [Daytona Dashboard](https://app.daytona.io)
2. Go to **Settings** → **Experimental**
3. Configure the following fields:
- **OTLP Endpoint**: Your OpenTelemetry collector endpoint (e.g., `https://otlp.nr-data.net`)
- **OTLP Headers**: Authentication headers in `key=value` format (e.g., `api-key=YOUR_API_KEY`)
Once configured, all sandboxes will automatically export their telemetry data to your specified OTLP endpoint.
### Resource Labels
All sandbox telemetry is automatically annotated with the following OTel resource attributes:
- `daytona_organization_id` - The organization the sandbox belongs to
- `daytona_region_id` - The region the sandbox is running in
- `daytona_snapshot` - The snapshot used to create the sandbox
#### Custom Resource Labels
You can attach additional resource labels by setting the `DAYTONA_SANDBOX_OTEL_EXTRA_LABELS` environment variable on a sandbox. Labels are specified as a comma-separated list of `key=value` pairs:
```bash
DAYTONA_SANDBOX_OTEL_EXTRA_LABELS="team=backend,env=staging,app=my-service"
```
These labels are added as OTel resource attributes to all traces, logs, and metrics emitted by the sandbox. This is useful for filtering and grouping telemetry data by custom dimensions in your observability platform.
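Because a malformed pair is easy to ship in a long label string, it can help to assemble the value programmatically. A minimal sketch (the label keys and values here are arbitrary examples):

```python
# Assemble a DAYTONA_SANDBOX_OTEL_EXTRA_LABELS value from a dict of labels
labels = {"team": "backend", "env": "staging", "app": "my-service"}
extra_labels = ",".join(f"{key}={value}" for key, value in labels.items())
print(extra_labels)  # team=backend,env=staging,app=my-service
```

The resulting string can then be set as the sandbox environment variable shown above.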
---
## Organization-Level Metrics
In addition to per-sandbox telemetry, Daytona exports organization-level resource metrics to your configured OTLP endpoint. These metrics are pushed every 60 seconds and provide a high-level view of resource consumption and quotas across your organization.
### Exported Metrics
| Metric | Unit | Description |
| --- | --- | --- |
| `daytona.sandbox.used_cpu` | cpu cores | Total CPU currently consumed by active sandboxes |
| `daytona.sandbox.used_ram` | GiB | Total memory currently consumed by active sandboxes |
| `daytona.sandbox.used_storage` | GiB | Total disk currently consumed by sandboxes |
| `daytona.sandbox.total_cpu` | cpu cores | Total CPU quota for the organization |
| `daytona.sandbox.total_ram` | GiB | Total memory quota for the organization |
| `daytona.sandbox.total_storage` | GiB | Total disk quota for the organization |
### Metric Attributes
Each metric includes the following attributes for filtering and grouping:
- **`organization.id`** (resource attribute) — The organization the metrics belong to
- **`region.id`** (data point attribute) — The region the resource usage and quota applies to, since quotas are per-region
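Because the metrics arrive as used/total pairs per region, quota utilization is a simple ratio computed on the consumer side. A sketch with hypothetical sample values:

```python
# Hypothetical data points for one region, in the units from the table above
used_cpu, total_cpu = 12.0, 64.0    # daytona.sandbox.used_cpu / total_cpu (cores)
used_ram, total_ram = 96.0, 256.0   # daytona.sandbox.used_ram / total_ram (GiB)

cpu_utilization = used_cpu / total_cpu * 100  # percent of the CPU quota in use
ram_utilization = used_ram / total_ram * 100  # percent of the RAM quota in use
print(f"CPU: {cpu_utilization:.2f}%, RAM: {ram_utilization:.2f}%")  # CPU: 18.75%, RAM: 37.50%
```

Most observability platforms can compute the same ratio directly in a dashboard query.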
### Configuration
Organization metrics are automatically exported when you have a [sandbox collection endpoint configured](#configure-sandbox-collection). No additional setup is required — the same OTLP endpoint receives both sandbox telemetry and organization metrics.
---
## SDK Tracing Configuration
When enabled, the Daytona SDK automatically instruments all SDK operations including:
- Sandbox creation, starting, stopping, and deletion
- File system operations
- Code execution
- Process management
- HTTP requests to the Daytona API
Traces are exported using the OTLP (OpenTelemetry Protocol) format and can be sent to any OTLP-compatible backend such as New Relic, Jaeger, or Zipkin.
### 1. Enable OTEL in SDK
To enable OpenTelemetry tracing, pass the `otelEnabled` experimental flag when initializing the Daytona client, or set the `DAYTONA_EXPERIMENTAL_OTEL_ENABLED` environment variable to `true`:
```bash
export DAYTONA_EXPERIMENTAL_OTEL_ENABLED=true
```
```python
from daytona import Daytona, DaytonaConfig
# Using async context manager (recommended)
async with Daytona(DaytonaConfig(
    _experimental={"otelEnabled": True}
)) as daytona:
    sandbox = await daytona.create()
    # All operations will be traced
    # OpenTelemetry traces are flushed on close
```
Or without context manager:
```python
daytona = Daytona(DaytonaConfig(
    _experimental={"otelEnabled": True}
))
try:
    sandbox = await daytona.create()
    # All operations will be traced
finally:
    await daytona.close()  # Flushes traces
```
```typescript
import { Daytona } from '@daytona/sdk'
// Using async dispose (recommended)
await using daytona = new Daytona({
  _experimental: { otelEnabled: true }
})
const sandbox = await daytona.create()
// All operations will be traced
// Traces are automatically flushed on dispose
```
Or with explicit disposal:
```typescript
const daytona = new Daytona({
  _experimental: { otelEnabled: true }
})
try {
  const sandbox = await daytona.create()
  // All operations will be traced
} finally {
  await daytona[Symbol.asyncDispose]() // Flushes traces
}
```
```go
import (
	"context"
	"log"
	"github.com/daytonaio/daytona/libs/sdk-go/pkg/daytona"
	"github.com/daytonaio/daytona/libs/sdk-go/pkg/types"
)
client, err := daytona.NewClientWithConfig(&types.DaytonaConfig{
	Experimental: &types.ExperimentalConfig{
		OtelEnabled: true,
	},
})
if err != nil {
	log.Fatal(err)
}
defer client.Close(context.Background()) // Flushes traces
sandbox, err := client.Create(context.Background(), nil)
// All operations will be traced
```
```ruby
require 'daytona'
config = Daytona::Config.new(
  _experimental: { 'otel_enabled' => true }
)
daytona = Daytona::Daytona.new(config)
sandbox = daytona.create
# All operations will be traced
daytona.close # Flushes traces
```
Or with `ensure` block:
```ruby
daytona = Daytona::Daytona.new(
  Daytona::Config.new(_experimental: { 'otel_enabled' => true })
)
begin
  sandbox = daytona.create
  # All operations will be traced
ensure
  daytona.close # Flushes traces
end
```
### 2. Configure OTLP Exporter
The SDK uses standard OpenTelemetry environment variables for configuration. Set these before running your application:
#### Required Environment Variables
```bash
# OTLP endpoint (without the /v1/traces path)
OTEL_EXPORTER_OTLP_ENDPOINT=https://otlp.nr-data.net:4317
# Authentication headers (format: key1=value1,key2=value2)
OTEL_EXPORTER_OTLP_HEADERS="api-key=your-api-key-here"
```
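The headers string uses the standard comma-separated `key=value` form. A small helper (not part of the SDK, shown only to illustrate the format) can sanity-check a value before you export it:

```python
def parse_otlp_headers(raw: str) -> dict[str, str]:
    """Split an OTEL_EXPORTER_OTLP_HEADERS string into a header dict."""
    return dict(
        pair.split("=", 1)  # split on the first '=' only; values may contain '='
        for pair in raw.split(",")
        if pair
    )

print(parse_otlp_headers("api-key=your-api-key-here"))  # {'api-key': 'your-api-key-here'}
```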
---
## Provider-Specific Examples
### New Relic
```bash
OTEL_EXPORTER_OTLP_ENDPOINT=https://otlp.nr-data.net:4317
OTEL_EXPORTER_OTLP_HEADERS="api-key=YOUR_NEW_RELIC_LICENSE_KEY"
```
### Jaeger (Local)
```bash
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
```
### Grafana Cloud
```bash
OTEL_EXPORTER_OTLP_ENDPOINT=https://otlp-gateway-prod-.grafana.net/otlp
OTEL_EXPORTER_OTLP_HEADERS="Authorization=Basic "
```
Setup: Go to [Grafana Cloud Portal](https://grafana.com) → **Connections** → **Add new connection** → Search for **OpenTelemetry (OTLP)** → Follow the wizard to create an access token. The endpoint and headers will be provided in the instrumentation instructions. See the [Grafana dashboard example](https://github.com/daytonaio/daytona/tree/main/examples/otel-dashboards/grafana) for detailed setup steps.
---
## Complete Example
Here's a complete example showing how to use OpenTelemetry tracing with the Daytona SDK:
```python
import asyncio
import os
from daytona import Daytona, DaytonaConfig
# Set OTEL configuration
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "https://otlp.nr-data.net:4317"
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = "api-key=YOUR_API_KEY"
async def main():
    # Initialize Daytona with OTEL enabled
    async with Daytona(DaytonaConfig(
        _experimental={"otelEnabled": True}
    )) as daytona:
        # Create a sandbox - this operation will be traced
        sandbox = await daytona.create()
        print(f"Created sandbox: {sandbox.id}")
        # Execute code - this operation will be traced
        result = await sandbox.process.code_run("""
import numpy as np
print(f"NumPy version: {np.__version__}")
""")
        print(f"Execution result: {result.result}")
        # Upload a file - this operation will be traced
        await sandbox.fs.upload_file("local.txt", "/home/daytona/remote.txt")
        # Delete sandbox - this operation will be traced
        await daytona.delete(sandbox)
    # Traces are automatically flushed when exiting the context manager

if __name__ == "__main__":
    asyncio.run(main())
```
```typescript
import { Daytona } from '@daytona/sdk'

// Set OTEL configuration
process.env.OTEL_EXPORTER_OTLP_ENDPOINT = "https://otlp.nr-data.net:4317"
process.env.OTEL_EXPORTER_OTLP_HEADERS = "api-key=YOUR_API_KEY"

async function main() {
  // Initialize Daytona with OTEL enabled
  await using daytona = new Daytona({
    _experimental: { otelEnabled: true }
  })
  // Create a sandbox - this operation will be traced
  const sandbox = await daytona.create()
  console.log(`Created sandbox: ${sandbox.id}`)
  // Execute code - this operation will be traced
  const result = await sandbox.process.codeRun(`
import numpy as np
print(f"NumPy version: {np.__version__}")
`)
  console.log(`Execution result: ${result.result}`)
  // Upload a file - this operation will be traced
  await sandbox.fs.uploadFile('local.txt', '/home/daytona/remote.txt')
  // Delete sandbox - this operation will be traced
  await daytona.delete(sandbox)
  // Traces are automatically flushed when the daytona instance is disposed
}
main().catch(console.error)
```
```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/daytonaio/daytona/libs/sdk-go/pkg/daytona"
	"github.com/daytonaio/daytona/libs/sdk-go/pkg/types"
)

func main() {
	// Set OTEL configuration
	os.Setenv("OTEL_EXPORTER_OTLP_ENDPOINT", "https://otlp.nr-data.net:4317")
	os.Setenv("OTEL_EXPORTER_OTLP_HEADERS", "api-key=YOUR_API_KEY")
	ctx := context.Background()
	// Initialize Daytona with OTEL enabled
	client, err := daytona.NewClientWithConfig(&types.DaytonaConfig{
		Experimental: &types.ExperimentalConfig{
			OtelEnabled: true,
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close(ctx) // Flushes traces on exit
	// Create a sandbox - this operation will be traced
	sandbox, err := client.Create(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("Created sandbox: %s\n", sandbox.ID)
	// Execute code - this operation will be traced
	result, err := sandbox.Process.CodeRun(ctx, &types.CodeRunParams{
		Code: `
import numpy as np
print(f"NumPy version: {np.__version__}")
`,
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("Execution result: %s\n", result.Result)
	// Upload a file - this operation will be traced
	err = sandbox.Fs.UploadFile(ctx, "local.txt", "/home/daytona/remote.txt")
	if err != nil {
		log.Fatal(err)
	}
	// Delete sandbox - this operation will be traced
	err = client.Delete(ctx, sandbox, nil)
	if err != nil {
		log.Fatal(err)
	}
	// Traces are flushed when client.Close is called via defer
}
```
```ruby
require 'daytona'
# Set OTEL configuration
ENV["OTEL_EXPORTER_OTLP_ENDPOINT"] = "https://otlp.nr-data.net:4317"
ENV["OTEL_EXPORTER_OTLP_HEADERS"] = "api-key=YOUR_API_KEY"
# Initialize Daytona with OTEL enabled
config = Daytona::Config.new(
  _experimental: { 'otel_enabled' => true }
)
daytona = Daytona::Daytona.new(config)
begin
  # Create a sandbox - this operation will be traced
  sandbox = daytona.create
  puts "Created sandbox: #{sandbox.id}"
  # Execute code - this operation will be traced
  result = sandbox.process.code_run("
import numpy as np
print(f'NumPy version: {np.__version__}')
")
  puts "Execution result: #{result.result}"
  # Upload a file - this operation will be traced
  sandbox.fs.upload_file("local.txt", "/home/daytona/remote.txt")
  # Delete sandbox - this operation will be traced
  daytona.delete(sandbox)
ensure
  daytona.close # Flushes traces
end
```
---
## What Gets Traced
The Daytona SDK automatically instruments the following operations:
### SDK-Level Operations
- `create()` - Sandbox creation and initialization
- `get()` - Retrieving sandbox instances
- `list()` - Listing sandboxes
- `start()` - Starting sandboxes
- `stop()` - Stopping sandboxes
- `delete()` - Deleting sandboxes
- All sandbox, snapshot, and volume operations (file system, code execution, process management, etc.)
### HTTP Requests
- All API calls to the Daytona backend
- Request duration and response status codes
- Error information for failed requests
### Trace Attributes
Each trace includes valuable metadata such as:
- Service name and version
- HTTP method, URL, and status code
- Request and response duration
- Error details (if applicable)
- Custom SDK operation metadata
---
## Dashboard Examples
- [New Relic](https://github.com/daytonaio/daytona/tree/main/examples/otel-dashboards/new-relic)
- [Grafana](https://github.com/daytonaio/daytona/tree/main/examples/otel-dashboards/grafana)
## Troubleshooting
### Verify Traces Are Being Sent
1. Check that environment variables are set correctly
2. Verify your OTLP endpoint is reachable
3. Confirm API keys/headers are valid
4. Check your observability platform for incoming traces
5. Look for connection errors in application logs
### Common Issues
**Traces not appearing:**
- Ensure `otelEnabled: true` is set in the configuration
- Verify OTLP endpoint and headers are correct
- Check that you're properly closing/disposing the Daytona instance to flush traces
**Connection refused:**
- Verify the OTLP endpoint URL is correct
- Ensure the endpoint is accessible from your application
- Check firewall rules if running in a restricted environment
**Authentication errors:**
- Verify API key format matches your provider's requirements
- Check that the `OTEL_EXPORTER_OTLP_HEADERS` format is correct (key=value pairs)
---
## Best Practices
1. **Always close the client**: Use `async with` (Python), `await using` (TypeScript), `defer client.Close()` (Go), or `ensure daytona.close` (Ruby) to ensure traces are properly flushed
1. **Monitor trace volume**: Be aware that enabling tracing will increase network traffic and storage in your observability platform
1. **Use in development first**: Test OTEL configuration in development before enabling in production
1. **Configure sampling**: For high-volume applications, consider configuring trace sampling to reduce costs
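For the sampling point, the OpenTelemetry SDKs define standard environment variables for head sampling; a sketch, assuming your exporter honors the standard OTel SDK configuration (verify against your provider's documentation):

```shell
# Keep roughly 10% of traces, selected deterministically by trace ID
export OTEL_TRACES_SAMPLER=traceidratio
export OTEL_TRACES_SAMPLER_ARG=0.1
```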
---
## Additional Resources
- [OpenTelemetry Documentation](https://opentelemetry.io/docs/)
- [OTLP Specification](https://opentelemetry.io/docs/specs/otlp/)
- [Daytona SDK Documentation](https://www.daytona.io/docs/en/introduction.md)
Daytona provides comprehensive file system operations through the `fs` module in sandboxes.
## Basic operations
Daytona provides methods to interact with the file system in sandboxes. You can perform various operations like listing files, creating directories, reading and writing files, and more.
File operations assume you are operating in the sandbox user's home directory (e.g. `workspace` implies `/home/[username]/workspace`). Use a leading `/` when providing absolute paths.
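The resolution rule above can be sketched as follows (illustrative only; the actual resolution happens inside the sandbox, and `/home/daytona` stands in for the sandbox user's home directory):

```python
import posixpath

def resolve_sandbox_path(path: str, home: str = "/home/daytona") -> str:
    # A leading "/" marks an absolute path; anything else is resolved
    # relative to the sandbox user's home directory.
    return path if path.startswith("/") else posixpath.join(home, path)

print(resolve_sandbox_path("workspace"))  # /home/daytona/workspace
print(resolve_sandbox_path("/tmp/data"))  # /tmp/data
```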
### List files and directories
Daytona provides methods to list files and directories in a sandbox by providing the path to the directory. If the path is not provided, the method will list the files and directories in the sandbox working directory.
```python
# List files in a directory
files = sandbox.fs.list_files("workspace")
for file in files:
    print(f"Name: {file.name}")
    print(f"Is directory: {file.is_dir}")
    print(f"Size: {file.size}")
    print(f"Modified: {file.mod_time}")
```
```typescript
// List files in a directory
const files = await sandbox.fs.listFiles('workspace')
files.forEach(file => {
  console.log(`Name: ${file.name}`)
  console.log(`Is directory: ${file.isDir}`)
  console.log(`Size: ${file.size}`)
  console.log(`Modified: ${file.modTime}`)
})
```
```ruby
# List directory contents
files = sandbox.fs.list_files("workspace/data")
# Print files and their sizes
files.each do |file|
  puts "#{file.name}: #{file.size} bytes" unless file.is_dir
end
# List only directories
dirs = files.select(&:is_dir)
puts "Subdirectories: #{dirs.map(&:name).join(', ')}"
```
```go
// List files in a directory
files, err := sandbox.FileSystem.ListFiles(ctx, "workspace")
if err != nil {
	log.Fatal(err)
}
for _, file := range files {
	fmt.Printf("Name: %s\n", file.Name)
	fmt.Printf("Is directory: %t\n", file.IsDirectory)
	fmt.Printf("Size: %d\n", file.Size)
	fmt.Printf("Modified: %s\n", file.ModifiedTime)
}
```
```bash
curl 'https://proxy.app.daytona.io/toolbox/{sandboxId}/files'
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk.md), [Ruby SDK](https://www.daytona.io/docs/en/ruby-sdk.md), [Go SDK](https://www.daytona.io/docs/en/go-sdk.md), and [API](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox) references:
> [**list_files (Python SDK)**](https://www.daytona.io/docs/en/python-sdk/sync/file-system.md#filesystemlist_files)
>
> [**listFiles (TypeScript SDK)**](https://www.daytona.io/docs/en/typescript-sdk/file-system.md#listfiles)
>
> [**list_files (Ruby SDK)**](https://www.daytona.io/docs/en/ruby-sdk/file-system.md#list_files)
>
> [**ListFiles (Go SDK)**](https://www.daytona.io/docs/en/go-sdk/daytona.md#FileSystemService.ListFiles)
>
> [**list files and directories (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/file-system/GET/files)
### Get directory or file information
Daytona provides methods to get directory or file information (name, size, mode, permissions, owner, group, modified time, and whether the path is a directory) by providing the path to the directory or file.
```python
# Get file metadata
info = sandbox.fs.get_file_info("workspace/data/file.txt")
print(f"Size: {info.size} bytes")
print(f"Modified: {info.mod_time}")
print(f"Mode: {info.mode}")

# Check if path is a directory
info = sandbox.fs.get_file_info("workspace/data")
if info.is_dir:
    print("Path is a directory")
```
```typescript
// Get file details
const info = await fs.getFileDetails('app/config.json')
console.log(`Size: ${info.size}, Modified: ${info.modTime}`)
```
```ruby
# Get file metadata
info = sandbox.fs.get_file_info("workspace/data/file.txt")
puts "Size: #{info.size} bytes"
puts "Modified: #{info.mod_time}"
puts "Mode: #{info.mode}"
# Check if path is a directory
info = sandbox.fs.get_file_info("workspace/data")
puts "Path is a directory" if info.is_dir
```
```go
// Get file metadata
info, err := sandbox.FileSystem.GetFileInfo(ctx, "workspace/data/file.txt")
if err != nil {
	log.Fatal(err)
}
fmt.Printf("Size: %d bytes\n", info.Size)
fmt.Printf("Modified: %s\n", info.ModifiedTime)
fmt.Printf("Mode: %s\n", info.Mode)

// Check if path is a directory
info, err = sandbox.FileSystem.GetFileInfo(ctx, "workspace/data")
if err != nil {
	log.Fatal(err)
}
if info.IsDirectory {
	fmt.Println("Path is a directory")
}
```
```bash
curl 'https://proxy.app.daytona.io/toolbox/{sandboxId}/files/info?path='
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk.md), [Ruby SDK](https://www.daytona.io/docs/en/ruby-sdk.md), [Go SDK](https://www.daytona.io/docs/en/go-sdk.md), and [API](https://www.daytona.io/docs/en/tools/api.md) references:
> [**get_file_info (Python SDK)**](https://www.daytona.io/docs/en/python-sdk/sync/file-system.md#filesystemget_file_info)
>
> [**getFileDetails (TypeScript SDK)**](https://www.daytona.io/docs/en/typescript-sdk/file-system.md#getfiledetails)
>
> [**get_file_info (Ruby SDK)**](https://www.daytona.io/docs/en/ruby-sdk/file-system.md#get_file_info)
>
> [**GetFileInfo (Go SDK)**](https://www.daytona.io/docs/en/go-sdk/daytona.md#FileSystemService.GetFileInfo)
>
> [**get file information (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/file-system/GET/files/info)
### Create directories
Daytona provides methods to create directories by providing the path to the directory and the permissions to set on the directory.
```python
# Create with specific permissions
sandbox.fs.create_folder("workspace/new-dir", "755")
```
```typescript
// Create with specific permissions
await sandbox.fs.createFolder('workspace/new-dir', '755')
```
```ruby
# Create a directory with standard permissions
sandbox.fs.create_folder("workspace/data", "755")
# Create a private directory
sandbox.fs.create_folder("workspace/secrets", "700")
```
```go
// Create with specific permissions
err := sandbox.FileSystem.CreateFolder(ctx, "workspace/new-dir",
	options.WithMode("755"),
)
if err != nil {
	log.Fatal(err)
}
```
```bash
curl 'https://proxy.app.daytona.io/toolbox/{sandboxId}/files/folder?path=&mode=' \
--request POST
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk.md), [Ruby SDK](https://www.daytona.io/docs/en/ruby-sdk.md), [Go SDK](https://www.daytona.io/docs/en/go-sdk.md), and [API](https://www.daytona.io/docs/en/tools/api.md) references:
> [**create_folder (Python SDK)**](https://www.daytona.io/docs/en/python-sdk/sync/file-system.md#filesystemcreate_folder)
>
> [**createFolder (TypeScript SDK)**](https://www.daytona.io/docs/en/typescript-sdk/file-system.md#createfolder)
>
> [**create_folder (Ruby SDK)**](https://www.daytona.io/docs/en/ruby-sdk/file-system.md#create_folder)
>
> [**CreateFolder (Go SDK)**](https://www.daytona.io/docs/en/go-sdk/daytona.md#FileSystemService.CreateFolder)
>
> [**create folder (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/file-system/POST/files/folder)
### Upload files
Daytona provides methods to upload a single file or multiple files to sandboxes.
#### Upload a single file
Daytona provides methods to upload a single file to a sandbox by providing the content and the destination path.
```python
# Upload a single file
with open("local_file.txt", "rb") as f:
    content = f.read()
sandbox.fs.upload_file(content, "remote_file.txt")
```
```typescript
// Upload a single file
const fileContent = Buffer.from('Hello, World!')
await sandbox.fs.uploadFile(fileContent, 'data.txt')
```
```ruby
# Upload a text file from string content
content = "Hello, World!"
sandbox.fs.upload_file(content, "tmp/hello.txt")
# Upload a local file
sandbox.fs.upload_file("local_file.txt", "tmp/file.txt")
# Upload binary data
data = { key: "value" }.to_json
sandbox.fs.upload_file(data, "tmp/config.json")
```
```go
// Upload from a local file path
err := sandbox.FileSystem.UploadFile(ctx, "local_file.txt", "remote_file.txt")
if err != nil {
	log.Fatal(err)
}

// Or upload from byte content
content := []byte("Hello, World!")
err = sandbox.FileSystem.UploadFile(ctx, content, "hello.txt")
if err != nil {
	log.Fatal(err)
}
```
```bash
curl 'https://proxy.app.daytona.io/toolbox/{sandboxId}/files/upload?path=' \
--request POST \
--header 'Content-Type: multipart/form-data' \
--form 'file='
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk.md), [Ruby SDK](https://www.daytona.io/docs/en/ruby-sdk.md), [Go SDK](https://www.daytona.io/docs/en/go-sdk.md), and [API](https://www.daytona.io/docs/en/tools/api.md) references:
> [**upload_file (Python SDK)**](https://www.daytona.io/docs/en/python-sdk/sync/file-system.md#filesystemupload_file)
>
> [**uploadFile (TypeScript SDK)**](https://www.daytona.io/docs/en/typescript-sdk/file-system.md#uploadfile)
>
> [**upload_file (Ruby SDK)**](https://www.daytona.io/docs/en/ruby-sdk/file-system.md#upload_file)
>
> [**UploadFile (Go SDK)**](https://www.daytona.io/docs/en/go-sdk/daytona.md#FileSystemService.UploadFile)
>
> [**upload file (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/file-system/POST/files/upload)
#### Upload multiple files
Daytona provides methods to upload multiple files to a sandbox by providing each file's content and destination path.
```python
from daytona import FileUpload

# Upload multiple files at once
files_to_upload = []
with open("file1.txt", "rb") as f1:
    files_to_upload.append(FileUpload(
        source=f1.read(),
        destination="data/file1.txt",
    ))
with open("file2.txt", "rb") as f2:
    files_to_upload.append(FileUpload(
        source=f2.read(),
        destination="data/file2.txt",
    ))
with open("settings.json", "rb") as f3:
    files_to_upload.append(FileUpload(
        source=f3.read(),
        destination="config/settings.json",
    ))
sandbox.fs.upload_files(files_to_upload)
```
```typescript
// Upload multiple files at once
const files = [
  {
    source: Buffer.from('Content of file 1'),
    destination: 'data/file1.txt',
  },
  {
    source: Buffer.from('Content of file 2'),
    destination: 'data/file2.txt',
  },
  {
    source: Buffer.from('{"key": "value"}'),
    destination: 'config/settings.json',
  },
]
await sandbox.fs.uploadFiles(files)
```
```ruby
# Upload multiple files
files = [
  FileUpload.new("Content of file 1", "/tmp/file1.txt"),
  FileUpload.new("workspace/data/file2.txt", "/tmp/file2.txt"),
  FileUpload.new('{"key": "value"}', "/tmp/config.json")
]
sandbox.fs.upload_files(files)
```
```go
// Upload multiple files by calling UploadFile for each
filesToUpload := []struct {
	source      string
	destination string
}{
	{"file1.txt", "data/file1.txt"},
	{"file2.txt", "data/file2.txt"},
	{"settings.json", "config/settings.json"},
}
for _, f := range filesToUpload {
	err := sandbox.FileSystem.UploadFile(ctx, f.source, f.destination)
	if err != nil {
		log.Fatal(err)
	}
}
```
```bash
curl 'https://proxy.app.daytona.io/toolbox/{sandboxId}/files/bulk-upload' \
--request POST
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk.md), [Ruby SDK](https://www.daytona.io/docs/en/ruby-sdk.md), [Go SDK](https://www.daytona.io/docs/en/go-sdk.md), and [API](https://www.daytona.io/docs/en/tools/api.md) references:
> [**upload_files (Python SDK)**](https://www.daytona.io/docs/en/python-sdk/sync/file-system.md#filesystemupload_files)
>
> [**uploadFiles (TypeScript SDK)**](https://www.daytona.io/docs/en/typescript-sdk/file-system.md#uploadfiles)
>
> [**upload_files (Ruby SDK)**](https://www.daytona.io/docs/en/ruby-sdk/file-system.md#upload_files)
>
> [**upload multiple files (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/file-system/POST/files/bulk-upload)
### Download files
Daytona provides methods to download files from sandboxes.
#### Download a single file
Daytona provides methods to download a single file from sandboxes by providing the path to the file to download.
```python
from daytona import DaytonaNotFoundError
try:
    content = sandbox.fs.download_file("file1.txt")
except DaytonaNotFoundError as error:
    print(f"Missing file: {error}")
else:
    with open("local_file.txt", "wb") as f:
        f.write(content)
    print(content.decode("utf-8"))
```
```typescript
import { DaytonaNotFoundError } from '@daytona/sdk'
try {
  const downloadedFile = await sandbox.fs.downloadFile('file1.txt')
  console.log('File content:', downloadedFile.toString())
} catch (error) {
  if (error instanceof DaytonaNotFoundError) {
    console.error(`Missing file: ${error.message}`)
  } else {
    throw error
  }
}
```
```ruby
# Download and get file content
content = sandbox.fs.download_file("workspace/data/file.txt")
puts content
# Download and save a file locally
sandbox.fs.download_file("workspace/data/file.txt", "local_copy.txt")
size_mb = File.size("local_copy.txt") / 1024.0 / 1024.0
puts "Size of the downloaded file: #{size_mb} MB"
```
```go
// Download and get contents in memory
content, err := sandbox.FileSystem.DownloadFile(ctx, "file1.txt", nil)
if err != nil {
	log.Fatal(err)
}
fmt.Println(string(content))

// Download and save to a local file
localPath := "local_file.txt"
content, err = sandbox.FileSystem.DownloadFile(ctx, "file1.txt", &localPath)
if err != nil {
	log.Fatal(err)
}
```
```bash
curl 'https://proxy.app.daytona.io/toolbox/{sandboxId}/files/download?path='
```
In the Python and TypeScript SDKs, `download_file` and `downloadFile` raise typed Daytona exceptions when the daemon returns structured per-file error metadata. Missing files map to not-found errors, invalid paths such as directories map to validation errors, and permission failures map to authorization errors.
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk.md), [Ruby SDK](https://www.daytona.io/docs/en/ruby-sdk.md), [Go SDK](https://www.daytona.io/docs/en/go-sdk.md), and [API](https://www.daytona.io/docs/en/tools/api.md) references:
> [**download_file (Python SDK)**](https://www.daytona.io/docs/en/python-sdk/sync/file-system.md#filesystemdownload_file)
>
> [**downloadFile (TypeScript SDK)**](https://www.daytona.io/docs/en/typescript-sdk/file-system.md#downloadfile)
>
> [**download_file (Ruby SDK)**](https://www.daytona.io/docs/en/ruby-sdk/file-system.md#download_file)
>
> [**DownloadFile (Go SDK)**](https://www.daytona.io/docs/en/go-sdk/daytona.md#FileSystemService.DownloadFile)
>
> [**download file (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/file-system/GET/files/download)
#### Download multiple files
Daytona provides methods to download multiple files from sandboxes by providing the paths to the files to download.
```python
from daytona import FileDownloadRequest

# Download multiple files at once
files_to_download = [
    FileDownloadRequest(source="data/file1.txt"),  # No destination - download to memory
    FileDownloadRequest(source="data/file2.txt", destination="local_file2.txt"),  # Download to local file
]
results = sandbox.fs.download_files(files_to_download)
for result in results:
    if result.error:
        print(f"Error downloading {result.source}: {result.error}")
        if result.error_details:
            print(
                f"  status={result.error_details.status_code} "
                f"code={result.error_details.error_code}"
            )
    elif result.result:
        print(f"Downloaded {result.source} to {result.result}")
```
```typescript
// Download multiple files at once
const files = [
  { source: 'data/file1.txt' }, // No destination - download to memory
  { source: 'data/file2.txt', destination: 'local_file2.txt' }, // Download to local file
]
const results = await sandbox.fs.downloadFiles(files)
results.forEach(result => {
  if (result.error) {
    console.error(`Error downloading ${result.source}: ${result.error}`)
    if (result.errorDetails) {
      console.error(
        `  status=${result.errorDetails.statusCode} code=${result.errorDetails.errorCode}`
      )
    }
  } else if (result.result) {
    console.log(`Downloaded ${result.source} to ${result.result}`)
  }
})
```
```ruby
# Download multiple files by calling download_file for each
files_to_download = [
  { remote: "data/file1.txt", local: nil }, # Download to memory
  { remote: "data/file2.txt", local: "local_file2.txt" } # Download to local file
]
files_to_download.each do |f|
  if f[:local]
    sandbox.fs.download_file(f[:remote], f[:local])
    puts "Downloaded #{f[:remote]} to #{f[:local]}"
  else
    content = sandbox.fs.download_file(f[:remote])
    puts "Downloaded #{f[:remote]} to memory (#{content.size} bytes)"
  end
end
```
```go
// Download multiple files by calling DownloadFile for each.
// ptrString is a small helper that returns a pointer to its argument:
// func ptrString(s string) *string { return &s }
filesToDownload := []struct {
	remotePath string
	localPath  *string
}{
	{"data/file1.txt", nil},                          // Download to memory
	{"data/file2.txt", ptrString("local_file2.txt")}, // Download to local file
}
for _, f := range filesToDownload {
	content, err := sandbox.FileSystem.DownloadFile(ctx, f.remotePath, f.localPath)
	if err != nil {
		fmt.Printf("Error downloading %s: %v\n", f.remotePath, err)
		continue
	}
	if f.localPath == nil {
		fmt.Printf("Downloaded %s to memory (%d bytes)\n", f.remotePath, len(content))
	} else {
		fmt.Printf("Downloaded %s to %s\n", f.remotePath, *f.localPath)
	}
}
```
```bash
curl 'https://proxy.app.daytona.io/toolbox/{sandboxId}/files/bulk-download' \
--request POST \
--header 'Content-Type: application/json' \
--data '{
"paths": [
""
]
}'
```
Bulk downloads keep the `error` string on each failed item for backwards compatibility and also include structured metadata:
- Python: `result.error_details.message`, `result.error_details.status_code`, `result.error_details.error_code`
- TypeScript: `result.errorDetails.message`, `result.errorDetails.statusCode`, `result.errorDetails.errorCode`
The toolbox bulk-download API returns successful files as multipart `file` parts and per-file failures as multipart `error` parts with JSON payloads containing `message`, `statusCode`, and `code`.
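To illustrate the shape of that multipart payload, here is a small, self-contained sketch. The body below is made up for illustration (the boundary, filenames, and payloads are hypothetical); it is parsed with Python's standard `email` module to separate successful `file` parts from failed `error` parts:

```python
import json
from email import message_from_bytes
from email.policy import default

# A made-up multipart body in the shape described above: one successful
# "file" part and one failed "error" part carrying structured JSON metadata.
boundary = "example-boundary"
body = (
    f"Content-Type: multipart/form-data; boundary={boundary}\r\n"
    "\r\n"
    f"--{boundary}\r\n"
    'Content-Disposition: form-data; name="file"; filename="data/file1.txt"\r\n'
    "\r\n"
    "hello world\r\n"
    f"--{boundary}\r\n"
    'Content-Disposition: form-data; name="error"; filename="data/missing.txt"\r\n'
    "Content-Type: application/json\r\n"
    "\r\n"
    '{"message": "file not found", "statusCode": 404, "code": "NOT_FOUND"}\r\n'
    f"--{boundary}--\r\n"
).encode()

message = message_from_bytes(body, policy=default)
for part in message.iter_parts():
    # The part name distinguishes successes ("file") from failures ("error")
    name = part.get_param("name", header="content-disposition")
    filename = part.get_filename()
    if name == "file":
        print(f"downloaded {filename}: {part.get_content().strip()}")
    else:
        detail = json.loads(part.get_content())
        print(f"failed {filename}: {detail['code']} ({detail['statusCode']})")
```

An HTTP client consuming the real endpoint would apply the same per-part dispatch to the response body.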
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk.md), [Ruby SDK](https://www.daytona.io/docs/en/ruby-sdk.md), [Go SDK](https://www.daytona.io/docs/en/go-sdk.md), and [API](https://www.daytona.io/docs/en/tools/api.md) references:
> [**download_files (Python SDK)**](https://www.daytona.io/docs/en/python-sdk/sync/file-system.md#filesystemdownload_files)
>
> [**downloadFiles (TypeScript SDK)**](https://www.daytona.io/docs/en/typescript-sdk/file-system.md#downloadfiles)
>
> [**download_file (Ruby SDK)**](https://www.daytona.io/docs/en/ruby-sdk/file-system.md#download_file)
>
> [**download multiple files (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/file-system/POST/files/bulk-download)
### Delete files
Daytona provides methods to delete files or directories from sandboxes by providing the path to the file or directory to delete.
```python
sandbox.fs.delete_file("workspace/file.txt")
```
```typescript
await sandbox.fs.deleteFile('workspace/file.txt')
```
```ruby
# Delete a file
sandbox.fs.delete_file("workspace/data/old_file.txt")
# Delete a directory recursively
sandbox.fs.delete_file("workspace/old_dir", recursive: true)
```
```go
// Delete a file
err := sandbox.FileSystem.DeleteFile(ctx, "workspace/file.txt", false)
if err != nil {
	log.Fatal(err)
}

// Delete a directory recursively
err = sandbox.FileSystem.DeleteFile(ctx, "workspace/old_dir", true)
if err != nil {
	log.Fatal(err)
}
```
```bash
curl 'https://proxy.app.daytona.io/toolbox/{sandboxId}/files?path=' \
--request DELETE
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk.md), [Ruby SDK](https://www.daytona.io/docs/en/ruby-sdk.md), [Go SDK](https://www.daytona.io/docs/en/go-sdk.md), and [API](https://www.daytona.io/docs/en/tools/api.md) references:
> [**delete_file (Python SDK)**](https://www.daytona.io/docs/en/python-sdk/sync/file-system.md#filesystemdelete_file)
>
> [**deleteFile (TypeScript SDK)**](https://www.daytona.io/docs/en/typescript-sdk/file-system.md#deletefile)
>
> [**delete_file (Ruby SDK)**](https://www.daytona.io/docs/en/ruby-sdk/file-system.md#delete_file)
>
> [**DeleteFile (Go SDK)**](https://www.daytona.io/docs/en/go-sdk/daytona.md#FileSystemService.DeleteFile)
>
> [**delete file or directory (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/file-system/DELETE/files)
## Advanced operations
Daytona provides advanced file system operations such as setting file permissions, finding and replacing text, and moving or renaming files.
### File permissions
Daytona provides methods to set file permissions, ownership, and group for a file or directory by providing the path to the file or directory and the permissions to set.
```python
# Set file permissions
sandbox.fs.set_file_permissions("workspace/file.txt", "644")
# Get file permissions
file_info = sandbox.fs.get_file_info("workspace/file.txt")
print(f"Permissions: {file_info.permissions}")
```
```typescript
// Set file permissions
await sandbox.fs.setFilePermissions('workspace/file.txt', { mode: '644' })
// Get file permissions
const fileInfo = await sandbox.fs.getFileDetails('workspace/file.txt')
console.log(`Permissions: ${fileInfo.permissions}`)
```
```ruby
# Make a file executable
sandbox.fs.set_file_permissions(
  path: "workspace/scripts/run.sh",
  mode: "755" # rwxr-xr-x
)

# Change file owner
sandbox.fs.set_file_permissions(
  path: "workspace/data/file.txt",
  owner: "daytona",
  group: "daytona"
)
```
```go
// Set file permissions
err := sandbox.FileSystem.SetFilePermissions(ctx, "workspace/file.txt",
	options.WithPermissionMode("644"),
)
if err != nil {
	log.Fatal(err)
}

// Set owner and group
err = sandbox.FileSystem.SetFilePermissions(ctx, "workspace/file.txt",
	options.WithOwner("daytona"),
	options.WithGroup("daytona"),
)
if err != nil {
	log.Fatal(err)
}

// Get file info to check permissions
fileInfo, err := sandbox.FileSystem.GetFileInfo(ctx, "workspace/file.txt")
if err != nil {
	log.Fatal(err)
}
fmt.Printf("Mode: %s\n", fileInfo.Mode)
```
```bash
curl 'https://proxy.app.daytona.io/toolbox/{sandboxId}/files/permissions?path=' \
--request POST
```
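As an aside on the mode strings used in these examples: they are standard octal permission triplets. A quick plain-Python illustration (no Daytona APIs involved) of how a string like `"755"` maps to rwx bits:

```python
import stat

def describe_mode(mode_str: str) -> str:
    """Render an octal mode string such as "755" as owner/group/others rwx triplets."""
    # int(mode_str, 8) parses the octal digits; stat.filemode renders them,
    # and [1:] drops the leading file-type slot.
    return stat.filemode(int(mode_str, 8))[1:]

print(describe_mode("755"))  # rwxr-xr-x (owner: rwx, group: r-x, others: r-x)
print(describe_mode("644"))  # rw-r--r--
print(describe_mode("700"))  # rwx------
```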
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk.md), [Ruby SDK](https://www.daytona.io/docs/en/ruby-sdk.md), [Go SDK](https://www.daytona.io/docs/en/go-sdk.md), and [API](https://www.daytona.io/docs/en/tools/api.md) references:
> [**set_file_permissions (Python SDK)**](https://www.daytona.io/docs/en/python-sdk/sync/file-system.md#filesystemset_file_permissions)
>
> [**setFilePermissions (TypeScript SDK)**](https://www.daytona.io/docs/en/typescript-sdk/file-system.md#setfilepermissions)
>
> [**set_file_permissions (Ruby SDK)**](https://www.daytona.io/docs/en/ruby-sdk/file-system.md#set_file_permissions)
>
> [**SetFilePermissions (Go SDK)**](https://www.daytona.io/docs/en/go-sdk/daytona.md#FileSystemService.SetFilePermissions)
>
> [**set file permissions (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/file-system/POST/files/permissions)
### Find and replace text in files
Daytona provides methods to search for text across files in a directory and to replace matched text in a list of files.
```python
# Search for text in files in a directory
results = sandbox.fs.find_files(
    path="workspace/src",
    pattern="text-of-interest"
)
for match in results:
    print(f"Absolute file path: {match.file}")
    print(f"Line number: {match.line}")
    print(f"Line content: {match.content}")
    print("\n")

# Replace text in files
sandbox.fs.replace_in_files(
    files=["workspace/file1.txt", "workspace/file2.txt"],
    pattern="old_text",
    new_value="new_text"
)
```
```typescript
// Search for text in files; if a folder is specified, the search is recursive
const results = await sandbox.fs.findFiles({
  path: 'workspace/src',
  pattern: 'text-of-interest',
})
results.forEach(match => {
  console.log('Absolute file path:', match.file)
  console.log('Line number:', match.line)
  console.log('Line content:', match.content)
})

// Replace text in files
await sandbox.fs.replaceInFiles(
  ['workspace/file1.txt', 'workspace/file2.txt'],
  'old_text',
  'new_text'
)
```
```ruby
# Search for TODOs in Ruby files
matches = sandbox.fs.find_files("workspace/src", "TODO:")
matches.each do |match|
  puts "#{match.file}:#{match.line}: #{match.content.strip}"
end

# Replace in specific files
results = sandbox.fs.replace_in_files(
  files: ["workspace/src/file1.rb", "workspace/src/file2.rb"],
  pattern: "old_function",
  new_value: "new_function"
)

# Print results
results.each do |result|
  if result.success
    puts "#{result.file}: #{result.success}"
  else
    puts "#{result.file}: #{result.error}"
  end
end
```
```go
// Search for text in files
result, err := sandbox.FileSystem.FindFiles(ctx, "workspace/src", "text-of-interest")
if err != nil {
	log.Fatal(err)
}
matches := result.([]map[string]any)
for _, match := range matches {
	fmt.Printf("Absolute file path: %s\n", match["file"])
	fmt.Printf("Line number: %v\n", match["line"])
	fmt.Printf("Line content: %s\n\n", match["content"])
}

// Replace text in files
_, err = sandbox.FileSystem.ReplaceInFiles(ctx,
	[]string{"workspace/file1.txt", "workspace/file2.txt"},
	"old_text",
	"new_text",
)
if err != nil {
	log.Fatal(err)
}
```
Find text in files:
```bash
curl 'https://proxy.app.daytona.io/toolbox/{sandboxId}/files/find?path=&pattern='
```
Replace text in files:
```bash
curl 'https://proxy.app.daytona.io/toolbox/{sandboxId}/files/replace' \
--request POST \
--header 'Content-Type: application/json' \
--data '{
"files": [
""
],
"newValue": "",
"pattern": ""
}'
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk.md), [Ruby SDK](https://www.daytona.io/docs/en/ruby-sdk.md), [Go SDK](https://www.daytona.io/docs/en/go-sdk.md), and [API](https://www.daytona.io/docs/en/tools/api.md) references:
> [**find_files (Python SDK)**](https://www.daytona.io/docs/en/python-sdk/sync/file-system.md#filesystemfind_files)
>
> [**replace_in_files (Python SDK)**](https://www.daytona.io/docs/en/python-sdk/sync/file-system.md#filesystemreplace_in_files)
>
> [**findFiles (TypeScript SDK)**](https://www.daytona.io/docs/en/typescript-sdk/file-system.md#findfiles)
>
> [**replaceInFiles (TypeScript SDK)**](https://www.daytona.io/docs/en/typescript-sdk/file-system.md#replaceinfiles)
>
> [**find_files (Ruby SDK)**](https://www.daytona.io/docs/en/ruby-sdk/file-system.md#find_files)
>
> [**replace_in_files (Ruby SDK)**](https://www.daytona.io/docs/en/ruby-sdk/file-system.md#replace_in_files)
>
> [**FindFiles (Go SDK)**](https://www.daytona.io/docs/en/go-sdk/daytona.md#FileSystemService.FindFiles)
>
> [**ReplaceInFiles (Go SDK)**](https://www.daytona.io/docs/en/go-sdk/daytona.md#FileSystemService.ReplaceInFiles)
>
> [**find text in files (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/file-system/GET/files/find)
>
> [**replace text in files (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/file-system/POST/files/replace)
### Move or rename directory or file
Daytona provides methods to move or rename a directory or file in sandboxes by providing the current path (source) and the new path (destination).
```python
# Rename a file
sandbox.fs.move_files(
    "workspace/data/old_name.txt",
    "workspace/data/new_name.txt"
)

# Move a file to a different directory
sandbox.fs.move_files(
    "workspace/data/file.txt",
    "workspace/archive/file.txt"
)

# Move a directory
sandbox.fs.move_files(
    "workspace/old_dir",
    "workspace/new_dir"
)
```
```typescript
// Move a file to a new location
await fs.moveFiles('app/temp/data.json', 'app/data/data.json')
```
```ruby
# Rename a file
sandbox.fs.move_files(
  "workspace/data/old_name.txt",
  "workspace/data/new_name.txt"
)

# Move a file to a different directory
sandbox.fs.move_files(
  "workspace/data/file.txt",
  "workspace/archive/file.txt"
)

# Move a directory
sandbox.fs.move_files(
  "workspace/old_dir",
  "workspace/new_dir"
)
```
```go
// Rename a file
err := sandbox.FileSystem.MoveFiles(ctx, "workspace/data/old_name.txt", "workspace/data/new_name.txt")
if err != nil {
	log.Fatal(err)
}

// Move a file to a different directory
err = sandbox.FileSystem.MoveFiles(ctx, "workspace/data/file.txt", "workspace/archive/file.txt")
if err != nil {
	log.Fatal(err)
}

// Move a directory
err = sandbox.FileSystem.MoveFiles(ctx, "workspace/old_dir", "workspace/new_dir")
if err != nil {
	log.Fatal(err)
}
```
```bash
curl 'https://proxy.app.daytona.io/toolbox/{sandboxId}/files/move?source=&destination=' \
--request POST
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk.md), [Ruby SDK](https://www.daytona.io/docs/en/ruby-sdk.md), [Go SDK](https://www.daytona.io/docs/en/go-sdk.md), and [API](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox) references:
> [**move_files (Python SDK)**](https://www.daytona.io/docs/en/python-sdk/sync/file-system.md#filesystemmove_files)
>
> [**moveFiles (TypeScript SDK)**](https://www.daytona.io/docs/en/typescript-sdk/file-system.md#movefiles)
>
> [**move_files (Ruby SDK)**](https://www.daytona.io/docs/en/ruby-sdk/file-system.md#move_files)
>
> [**MoveFiles (Go SDK)**](https://www.daytona.io/docs/en/go-sdk/daytona.md#FileSystemService.MoveFiles)
>
> [**move or rename file or directory (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/file-system/POST/files/move)
This section introduces core concepts, common workflows, and next steps for using Daytona.
## Dashboard
[Daytona Dashboard ↗](https://app.daytona.io/) is a visual user interface where you can manage sandboxes, access API keys, view usage, and more.
It serves as the primary point of control for managing your Daytona resources.
## SDKs
Daytona provides [Python](https://www.daytona.io/docs/en/python-sdk.md), [TypeScript](https://www.daytona.io/docs/en/typescript-sdk.md), [Ruby](https://www.daytona.io/docs/en/ruby-sdk.md), and [Go](https://www.daytona.io/docs/en/go-sdk.md) SDKs to programmatically interact with sandboxes. They support sandbox lifecycle management, code execution, resource access, and more.
## CLI
Daytona provides command-line access to core features for interacting with Daytona Sandboxes, including managing their lifecycle, snapshots, and more.
To interact with Daytona Sandboxes from the command line, install the Daytona CLI:
```bash
# macOS / Linux (Homebrew)
brew install daytonaio/cli/daytona
```
```bash
# Windows (PowerShell)
powershell -Command "irm https://get.daytona.io/windows | iex"
```
After installing the Daytona CLI, use the `daytona` command to interact with Daytona Sandboxes from the command line.
To upgrade the Daytona CLI to the latest version:
```bash
# macOS / Linux (Homebrew)
brew upgrade daytonaio/cli/daytona
```
```bash
# Windows (PowerShell)
powershell -Command "irm https://get.daytona.io/windows | iex"
```
To view all available commands and flags, see the [CLI reference](https://www.daytona.io/docs/en/tools/cli.md).
## API
Daytona provides a RESTful API for interacting with Daytona Sandboxes, including managing their lifecycle, snapshots, and more.
It serves as a flexible and powerful way to interact with Daytona from your own applications.
To interact with Daytona Sandboxes from the API, see the [API reference](https://www.daytona.io/docs/en/tools/api.md).
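As a minimal sketch of authenticating a raw API request (assuming `DAYTONA_API_KEY` is set; the `/sandbox` endpoint path mirrors the curl examples in these docs and is illustrative, so check the API reference for exact routes):

```python
import os
import urllib.request

# Base URL as used by the curl examples in this documentation
base_url = "https://app.daytona.io/api"
api_key = os.environ.get("DAYTONA_API_KEY", "YOUR_API_KEY")

# Build a Bearer-authorized request (hypothetical listing call)
request = urllib.request.Request(
    f"{base_url}/sandbox",
    headers={"Authorization": f"Bearer {api_key}"},
)

# Sending is left commented out so the sketch stays offline:
# with urllib.request.urlopen(request) as response:
#     print(response.read().decode())

print(request.get_full_url())
```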
## MCP server
Daytona provides a Model Context Protocol (MCP) server that enables AI agents to interact with Daytona Sandboxes programmatically. The MCP server integrates with popular AI agents including Claude, Cursor, and Windsurf.
To set up the MCP server with your AI agent:
```bash
daytona mcp init [claude/cursor/windsurf]
```
For more information, see the [MCP server documentation](https://www.daytona.io/docs/en/mcp.md).
## Multiple runtime support
Daytona supports multiple programming language runtimes for direct code execution inside the sandbox.
[TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk.md) works across multiple **JavaScript runtimes** including **Node.js**, **browsers**, and **serverless platforms**: Cloudflare Workers, AWS Lambda, Azure Functions, etc.
Using the Daytona SDK in browser-based environments or frameworks like [**Vite**](https://www.daytona.io/docs/en/getting-started.md#daytona-in-vite-projects) and [**Next.js**](https://www.daytona.io/docs/en/getting-started.md#daytona-in-nextjs-projects) requires configuring node polyfills.
### Daytona in Vite projects
When using Daytona SDK in a Vite-based project, configure node polyfills to ensure compatibility.
Add the following configuration to your `vite.config.ts` file, registering the `nodePolyfills` plugin in the `plugins` array:
```typescript
import { defineConfig } from 'vite'
import { nodePolyfills } from 'vite-plugin-node-polyfills'

export default defineConfig({
  plugins: [
    // ... other plugins
    nodePolyfills({
      globals: { global: true, process: true, Buffer: true },
      overrides: {
        path: 'path-browserify-win32',
      },
    }),
  ],
  // ... rest of your config
})
```
### Daytona in Next.js projects
When using Daytona SDK in a Next.js project, configure node polyfills to ensure compatibility with Webpack and Turbopack bundlers.
Add the following configuration to your `next.config.ts` file:
```typescript
import type { NextConfig } from 'next'
import NodePolyfillPlugin from 'node-polyfill-webpack-plugin'
import { env, nodeless } from 'unenv'

const { alias: turbopackAlias } = env(nodeless, {})

const nextConfig: NextConfig = {
  // Turbopack
  experimental: {
    turbo: {
      resolveAlias: {
        ...turbopackAlias,
      },
    },
  },
  // Webpack
  webpack: (config, { isServer }) => {
    if (!isServer) {
      config.plugins.push(new NodePolyfillPlugin())
    }
    return config
  },
}

export default nextConfig
```
## Guides
Daytona provides a comprehensive set of guides to help you get started. The guides cover a wide range of topics, from basic usage to advanced topics, and showcase various types of integrations between Daytona and other tools.
For more information, see [guides](https://www.daytona.io/docs/en/guides.md).
## Examples
Daytona provides quick examples for common sandbox operations and best practices.
The examples are based on the Daytona [Python SDK](https://www.daytona.io/docs/en/python-sdk/sync/process.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk/process.md), [Go SDK](https://www.daytona.io/docs/en/go-sdk/daytona.md#type-processservice), [Ruby SDK](https://www.daytona.io/docs/en/ruby-sdk/process.md), [CLI](https://www.daytona.io/docs/en/tools/cli.md), and [API](https://www.daytona.io/docs/en/tools/api.md) references. More examples are available in the [GitHub repository ↗](https://github.com/daytonaio/daytona/tree/main/examples).
### Create a sandbox
Create a [sandbox](https://www.daytona.io/docs/en/sandboxes.md) with default settings.
```python
from daytona import Daytona
daytona = Daytona()
sandbox = daytona.create()
print(f"Sandbox ID: {sandbox.id}")
```
```typescript
import { Daytona } from '@daytona/sdk';
const daytona = new Daytona();
const sandbox = await daytona.create();
console.log(`Sandbox ID: ${sandbox.id}`);
```
```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/daytonaio/daytona/libs/sdk-go/pkg/daytona"
)

func main() {
	client, err := daytona.NewClient()
	if err != nil {
		log.Fatal(err)
	}
	sandbox, err := client.Create(context.Background(), nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("Sandbox ID: %s\n", sandbox.ID)
}
```
```ruby
require 'daytona'
daytona = Daytona::Daytona.new
sandbox = daytona.create
puts "Sandbox ID: #{sandbox.id}"
```
```shell
daytona create
```
```bash
curl 'https://app.daytona.io/api/sandbox' \
--request POST \
--header 'Authorization: Bearer ' \
--header 'Content-Type: application/json' \
--data '{}'
```
### Create and run code in a sandbox
Create a [sandbox](https://www.daytona.io/docs/en/sandboxes.md) and run code securely in it.
```python
from daytona import Daytona
daytona = Daytona()
sandbox = daytona.create()
response = sandbox.process.exec("echo 'Hello, World!'")
print(response.result)
sandbox.delete()
```
```typescript
import { Daytona } from '@daytona/sdk';
const daytona = new Daytona();
const sandbox = await daytona.create();
const response = await sandbox.process.executeCommand('echo "Hello, World!"');
console.log(response.result);
await sandbox.delete();
```
```go
package main
import (
"context"
"fmt"
"log"
"github.com/daytonaio/daytona/libs/sdk-go/pkg/daytona"
)
func main() {
client, err := daytona.NewClient()
if err != nil {
log.Fatal(err)
}
sandbox, err := client.Create(context.Background(), nil)
if err != nil {
log.Fatal(err)
}
response, err := sandbox.Process.ExecuteCommand(context.Background(), "echo 'Hello, World!'")
if err != nil {
log.Fatal(err)
}
fmt.Println(response.Result)
sandbox.Delete(context.Background())
}
```
```ruby
require 'daytona'
daytona = Daytona::Daytona.new
sandbox = daytona.create
response = sandbox.process.exec(command: "echo 'Hello, World!'")
puts response.result
daytona.delete(sandbox)
```
```shell
daytona create --name my-sandbox
daytona exec my-sandbox -- echo 'Hello, World!'
daytona delete my-sandbox
```
```bash
# Create a sandbox
curl 'https://app.daytona.io/api/sandbox' \
--request POST \
--header 'Authorization: Bearer ' \
--header 'Content-Type: application/json' \
--data '{}'
# Execute a command in the sandbox
curl 'https://proxy.app.daytona.io/toolbox/{sandboxId}/process/execute' \
--request POST \
--header 'Authorization: Bearer ' \
--header 'Content-Type: application/json' \
--data '{
"command": "echo '\''Hello, World!'\''"
}'
# Delete the sandbox
curl 'https://app.daytona.io/api/sandbox/{sandboxId}' \
--request DELETE \
--header 'Authorization: Bearer '
```
### Create a sandbox with custom resources
Create a sandbox with [custom resources](https://www.daytona.io/docs/en/sandboxes.md#resources) (CPU, memory, disk).
```python
from daytona import Daytona, CreateSandboxFromImageParams, Image, Resources
daytona = Daytona()
sandbox = daytona.create(
CreateSandboxFromImageParams(
image=Image.debian_slim("3.12"),
resources=Resources(cpu=2, memory=4, disk=8)
)
)
```
```typescript
import { Daytona, Image } from '@daytona/sdk';
const daytona = new Daytona();
const sandbox = await daytona.create({
image: Image.debianSlim('3.12'),
resources: { cpu: 2, memory: 4, disk: 8 }
});
```
```go
package main
import (
"context"
"log"
"github.com/daytonaio/daytona/libs/sdk-go/pkg/daytona"
"github.com/daytonaio/daytona/libs/sdk-go/pkg/types"
)
func main() {
client, err := daytona.NewClient()
if err != nil {
log.Fatal(err)
}
sandbox, err := client.Create(context.Background(), types.ImageParams{
Image: daytona.DebianSlim(nil),
Resources: &types.Resources{
CPU: 2,
Memory: 4,
Disk: 8,
},
})
if err != nil {
log.Fatal(err)
}
}
```
```ruby
require 'daytona'
daytona = Daytona::Daytona.new
sandbox = daytona.create(
Daytona::CreateSandboxFromImageParams.new(
image: Daytona::Image.debian_slim('3.12'),
resources: Daytona::Resources.new(cpu: 2, memory: 4, disk: 8)
)
)
```
```shell
daytona create --class small
```
```bash
curl 'https://app.daytona.io/api/sandbox' \
--request POST \
--header 'Authorization: Bearer ' \
--header 'Content-Type: application/json' \
--data '{
"cpu": 2,
"memory": 4,
"disk": 8
}'
```
### Create an ephemeral sandbox
Create an [ephemeral sandbox](https://www.daytona.io/docs/en/sandboxes.md#ephemeral-sandboxes) that is automatically deleted when stopped.
```python
from daytona import Daytona, CreateSandboxFromSnapshotParams
daytona = Daytona()
sandbox = daytona.create(
CreateSandboxFromSnapshotParams(ephemeral=True, auto_stop_interval=5)
)
```
```typescript
import { Daytona } from '@daytona/sdk';
const daytona = new Daytona();
const sandbox = await daytona.create({
ephemeral: true,
autoStopInterval: 5
});
```
```go
package main
import (
"context"
"log"
"github.com/daytonaio/daytona/libs/sdk-go/pkg/daytona"
"github.com/daytonaio/daytona/libs/sdk-go/pkg/types"
)
func main() {
client, err := daytona.NewClient()
if err != nil {
log.Fatal(err)
}
autoStop := 5
sandbox, err := client.Create(context.Background(), types.SnapshotParams{
SandboxBaseParams: types.SandboxBaseParams{
Ephemeral: true,
AutoStopInterval: &autoStop,
},
})
if err != nil {
log.Fatal(err)
}
}
```
```ruby
require 'daytona'
daytona = Daytona::Daytona.new
sandbox = daytona.create(
Daytona::CreateSandboxFromSnapshotParams.new(ephemeral: true, auto_stop_interval: 5)
)
```
```shell
daytona create --auto-stop 5 --auto-delete 0
```
```bash
curl 'https://app.daytona.io/api/sandbox' \
--request POST \
--header 'Authorization: Bearer ' \
--header 'Content-Type: application/json' \
--data '{
"autoStopInterval": 5,
"autoDeleteInterval": 0
}'
```
### Create a sandbox from a snapshot
Create a sandbox from a pre-built [snapshot](https://www.daytona.io/docs/en/snapshots.md) for faster sandbox creation with pre-installed dependencies.
```python
from daytona import Daytona, CreateSandboxFromSnapshotParams
daytona = Daytona()
sandbox = daytona.create(
CreateSandboxFromSnapshotParams(
snapshot="my-snapshot-name",
language="python"
)
)
```
```typescript
import { Daytona } from '@daytona/sdk';
const daytona = new Daytona();
const sandbox = await daytona.create({
snapshot: 'my-snapshot-name',
language: 'typescript'
});
```
```go
package main
import (
"context"
"log"
"github.com/daytonaio/daytona/libs/sdk-go/pkg/daytona"
"github.com/daytonaio/daytona/libs/sdk-go/pkg/types"
)
func main() {
client, err := daytona.NewClient()
if err != nil {
log.Fatal(err)
}
sandbox, err := client.Create(context.Background(), types.SnapshotParams{
Snapshot: "my-snapshot-name",
SandboxBaseParams: types.SandboxBaseParams{
Language: types.CodeLanguagePython,
},
})
if err != nil {
log.Fatal(err)
}
}
```
```ruby
require 'daytona'
daytona = Daytona::Daytona.new
sandbox = daytona.create(
Daytona::CreateSandboxFromSnapshotParams.new(
snapshot: 'my-snapshot-name',
language: Daytona::CodeLanguage::PYTHON
)
)
```
```shell
daytona create --snapshot my-snapshot-name
```
```bash
curl 'https://app.daytona.io/api/sandbox' \
--request POST \
--header 'Authorization: Bearer ' \
--header 'Content-Type: application/json' \
--data '{
"snapshot": "my-snapshot-name"
}'
```
### Create a sandbox with a declarative image
Create a sandbox with a [declarative image](https://www.daytona.io/docs/en/declarative-builder.md) that defines dependencies programmatically.
```python
from daytona import Daytona, CreateSandboxFromImageParams, Image
daytona = Daytona()
image = (
Image.debian_slim("3.12")
.pip_install(["requests", "pandas", "numpy"])
.workdir("/home/daytona")
)
sandbox = daytona.create(
CreateSandboxFromImageParams(image=image),
on_snapshot_create_logs=print
)
```
```typescript
import { Daytona, Image } from '@daytona/sdk';
const daytona = new Daytona();
const image = Image.debianSlim('3.12')
.pipInstall(['requests', 'pandas', 'numpy'])
.workdir('/home/daytona');
const sandbox = await daytona.create(
{ image },
{ onSnapshotCreateLogs: console.log }
);
```
```go
package main
import (
"context"
"log"
"github.com/daytonaio/daytona/libs/sdk-go/pkg/daytona"
"github.com/daytonaio/daytona/libs/sdk-go/pkg/types"
)
func main() {
client, err := daytona.NewClient()
if err != nil {
log.Fatal(err)
}
image := daytona.DebianSlim(nil).
PipInstall([]string{"requests", "pandas", "numpy"}).
Workdir("/home/daytona")
sandbox, err := client.Create(context.Background(), types.ImageParams{
Image: image,
})
if err != nil {
log.Fatal(err)
}
}
```
```ruby
require 'daytona'
daytona = Daytona::Daytona.new
image = Daytona::Image
.debian_slim('3.12')
.pip_install(['requests', 'pandas', 'numpy'])
.workdir('/home/daytona')
sandbox = daytona.create(
Daytona::CreateSandboxFromImageParams.new(image: image),
on_snapshot_create_logs: proc { |chunk| puts chunk }
)
```
```shell
daytona create --dockerfile ./Dockerfile
```
```bash
curl 'https://app.daytona.io/api/sandbox' \
--request POST \
--header 'Authorization: Bearer ' \
--header 'Content-Type: application/json' \
--data '{
"buildInfo": {
"dockerfileContent": "FROM python:3.12-slim\nRUN pip install requests pandas numpy\nWORKDIR /home/daytona"
}
}'
```
### Create a sandbox with volumes
Create a sandbox with a [volume](https://www.daytona.io/docs/en/volumes.md) mounted to share data across sandboxes.
```python
from daytona import Daytona, CreateSandboxFromSnapshotParams, VolumeMount
daytona = Daytona()
volume = daytona.volume.get("my-volume", create=True)
sandbox = daytona.create(
CreateSandboxFromSnapshotParams(
volumes=[VolumeMount(volume_id=volume.id, mount_path="/home/daytona/data")]
)
)
```
```typescript
import { Daytona } from '@daytona/sdk';
const daytona = new Daytona();
const volume = await daytona.volume.get('my-volume', true);
const sandbox = await daytona.create({
volumes: [{ volumeId: volume.id, mountPath: '/home/daytona/data' }]
});
```
```go
package main
import (
"context"
"log"
"github.com/daytonaio/daytona/libs/sdk-go/pkg/daytona"
"github.com/daytonaio/daytona/libs/sdk-go/pkg/types"
)
func main() {
client, err := daytona.NewClient()
if err != nil {
log.Fatal(err)
}
volume, err := client.Volume.Get(context.Background(), "my-volume")
if err != nil {
volume, err = client.Volume.Create(context.Background(), "my-volume")
if err != nil {
log.Fatal(err)
}
}
sandbox, err := client.Create(context.Background(), types.SnapshotParams{
SandboxBaseParams: types.SandboxBaseParams{
Volumes: []types.VolumeMount{{
VolumeID: volume.ID,
MountPath: "/home/daytona/data",
}},
},
})
if err != nil {
log.Fatal(err)
}
}
```
```ruby
require 'daytona'
daytona = Daytona::Daytona.new
volume = daytona.volume.get('my-volume', create: true)
sandbox = daytona.create(
Daytona::CreateSandboxFromSnapshotParams.new(
volumes: [DaytonaApiClient::SandboxVolume.new(
volume_id: volume.id,
mount_path: '/home/daytona/data'
)]
)
)
```
```shell
daytona volume create my-volume
daytona create --volume my-volume:/home/daytona/data
```
```bash
curl 'https://app.daytona.io/api/sandbox' \
--request POST \
--header 'Authorization: Bearer ' \
--header 'Content-Type: application/json' \
--data '{
"volumes": [
{
"volumeId": "",
"mountPath": "/home/daytona/data"
}
]
}'
```
### Create a sandbox with a Git repository cloned
Create a sandbox and clone a [Git repository](https://www.daytona.io/docs/en/typescript-sdk/git.md) into it to work with version control.
```python
from daytona import Daytona
daytona = Daytona()
sandbox = daytona.create()
sandbox.git.clone("https://github.com/daytonaio/daytona.git", "/home/daytona/daytona")
status = sandbox.git.status("/home/daytona/daytona")
print(f"Branch: {status.current_branch}")
```
```typescript
import { Daytona } from '@daytona/sdk';
const daytona = new Daytona();
const sandbox = await daytona.create();
await sandbox.git.clone('https://github.com/daytonaio/daytona.git', '/home/daytona/daytona');
const status = await sandbox.git.status('/home/daytona/daytona');
console.log(`Branch: ${status.currentBranch}`);
```
```go
package main
import (
"context"
"fmt"
"log"
"github.com/daytonaio/daytona/libs/sdk-go/pkg/daytona"
)
func main() {
client, err := daytona.NewClient()
if err != nil {
log.Fatal(err)
}
sandbox, err := client.Create(context.Background(), nil)
if err != nil {
log.Fatal(err)
}
if err := sandbox.Git.Clone(context.Background(), "https://github.com/daytonaio/daytona.git", "/home/daytona/daytona"); err != nil {
log.Fatal(err)
}
status, err := sandbox.Git.Status(context.Background(), "/home/daytona/daytona")
if err != nil {
log.Fatal(err)
}
fmt.Printf("Branch: %s\n", status.CurrentBranch)
}
```
```ruby
require 'daytona'
daytona = Daytona::Daytona.new
sandbox = daytona.create
sandbox.git.clone(url: "https://github.com/daytonaio/daytona.git", path: "/home/daytona/daytona")
status = sandbox.git.status("/home/daytona/daytona")
puts "Branch: #{status.current_branch}"
```
```bash
# Create a sandbox
curl 'https://app.daytona.io/api/sandbox' \
--request POST \
--header 'Authorization: Bearer ' \
--header 'Content-Type: application/json' \
--data '{}'
# Clone a Git repository in the sandbox
curl 'https://proxy.app.daytona.io/toolbox/{sandboxId}/git/clone' \
--request POST \
--header 'Authorization: Bearer ' \
--header 'Content-Type: application/json' \
--data '{
"url": "https://github.com/daytonaio/daytona.git",
"path": "/home/daytona/daytona"
}'
# Get repository status
curl 'https://proxy.app.daytona.io/toolbox/{sandboxId}/git/status?path=/home/daytona/daytona' \
--header 'Authorization: Bearer '
```
Daytona provides built-in Git support through the `git` module in sandboxes.
## Basic operations
Daytona provides methods to clone, check status, and manage Git repositories in sandboxes.
Similar to [file system operations](https://www.daytona.io/docs/en/file-system-operations.md), relative clone paths are resolved against the current sandbox working directory: the `WORKDIR` specified in the Dockerfile if present, or the user's home directory otherwise. For example, with a working directory of `/my-work-dir`, `workspace/repo` resolves to `/my-work-dir/workspace/repo`. You can also provide an absolute path by starting it with `/`.
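The path resolution described above can be expressed as a minimal Python sketch; the `resolve_path` helper and the `/my-work-dir` default are illustrative, not part of the SDK:

```python
def resolve_path(path: str, workdir: str = "/my-work-dir") -> str:
    """Sketch of how a sandbox resolves a clone destination path.

    Absolute paths (starting with '/') are used as-is; relative paths
    are resolved against the sandbox working directory (the Dockerfile
    WORKDIR if present, otherwise the user's home directory).
    """
    if path.startswith("/"):
        return path
    return f"{workdir}/{path}"


print(resolve_path("workspace/repo"))  # /my-work-dir/workspace/repo
print(resolve_path("/tmp/repo"))       # /tmp/repo
```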
### Clone repositories
Daytona provides methods to clone Git repositories into sandboxes. You can clone public or private repositories, specific branches, and authenticate using personal access tokens.
```python
# Basic clone
sandbox.git.clone(
url="https://github.com/user/repo.git",
path="workspace/repo"
)
# Clone with authentication
sandbox.git.clone(
url="https://github.com/user/repo.git",
path="workspace/repo",
username="git",
password="personal_access_token"
)
# Clone specific branch
sandbox.git.clone(
url="https://github.com/user/repo.git",
path="workspace/repo",
branch="develop"
)
```
```typescript
// Basic clone
await sandbox.git.clone(
"https://github.com/user/repo.git",
"workspace/repo"
);
// Clone with authentication
await sandbox.git.clone(
"https://github.com/user/repo.git",
"workspace/repo",
undefined,
undefined,
"git",
"personal_access_token"
);
// Clone specific branch
await sandbox.git.clone(
"https://github.com/user/repo.git",
"workspace/repo",
"develop"
);
```
```ruby
# Basic clone
sandbox.git.clone(
url: 'https://github.com/user/repo.git',
path: 'workspace/repo'
)
# Clone with authentication
sandbox.git.clone(
url: 'https://github.com/user/repo.git',
path: 'workspace/repo',
username: 'git',
password: 'personal_access_token'
)
# Clone specific branch
sandbox.git.clone(
url: 'https://github.com/user/repo.git',
path: 'workspace/repo',
branch: 'develop'
)
```
```go
// Basic clone
err := sandbox.Git.Clone(ctx, "https://github.com/user/repo.git", "workspace/repo")
if err != nil {
log.Fatal(err)
}
// Clone with authentication
err = sandbox.Git.Clone(ctx, "https://github.com/user/repo.git", "workspace/repo",
options.WithUsername("git"),
options.WithPassword("personal_access_token"),
)
if err != nil {
log.Fatal(err)
}
// Clone specific branch
err = sandbox.Git.Clone(ctx, "https://github.com/user/repo.git", "workspace/repo",
options.WithBranch("develop"),
)
if err != nil {
log.Fatal(err)
}
```
```bash
curl 'https://proxy.app.daytona.io/toolbox/{sandboxId}/git/clone' \
--request POST \
--header 'Content-Type: application/json' \
--data '{
"branch": "",
"commit_id": "",
"password": "",
"path": "",
"url": "",
"username": ""
}'
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk/sync/git.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk/git.md), [Ruby SDK](https://www.daytona.io/docs/en/ruby-sdk/git.md), [Go SDK](https://www.daytona.io/docs/en/go-sdk.md) and [API](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox) references:
> [**clone (Python SDK)**](https://www.daytona.io/docs/en/python-sdk/sync/git.md#gitclone)
>
> [**clone (TypeScript SDK)**](https://www.daytona.io/docs/en/typescript-sdk/git.md#clone)
>
> [**clone (Ruby SDK)**](https://www.daytona.io/docs/en/ruby-sdk/git.md#clone)
>
> [**Clone (Go SDK)**](https://www.daytona.io/docs/en/go-sdk/daytona.md#GitService.Clone)
>
> [**clone repository (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/git/POST/git/clone)
### Get repository status
Daytona provides methods to check the status of Git repositories in sandboxes. You can get the current branch, the list of modified files, and the number of commits ahead of and behind the remote branch.
```python
# Get repository status
status = sandbox.git.status("workspace/repo")
print(f"Current branch: {status.current_branch}")
print(f"Commits ahead: {status.ahead}")
print(f"Commits behind: {status.behind}")
for file in status.file_status:
print(f"File: {file.name}")
# List branches
response = sandbox.git.branches("workspace/repo")
for branch in response.branches:
print(f"Branch: {branch}")
```
```typescript
// Get repository status
const status = await sandbox.git.status("workspace/repo");
console.log(`Current branch: ${status.currentBranch}`);
console.log(`Commits ahead: ${status.ahead}`);
console.log(`Commits behind: ${status.behind}`);
status.fileStatus.forEach(file => {
console.log(`File: ${file.name}`);
});
// List branches
const response = await sandbox.git.branches("workspace/repo");
response.branches.forEach(branch => {
console.log(`Branch: ${branch}`);
});
```
```ruby
# Get repository status
status = sandbox.git.status('workspace/repo')
puts "Current branch: #{status.current_branch}"
puts "Commits ahead: #{status.ahead}"
puts "Commits behind: #{status.behind}"
status.file_status.each do |file|
puts "File: #{file.name}"
end
# List branches
response = sandbox.git.branches('workspace/repo')
response.branches.each do |branch|
puts "Branch: #{branch}"
end
```
```go
// Get repository status
status, err := sandbox.Git.Status(ctx, "workspace/repo")
if err != nil {
log.Fatal(err)
}
fmt.Printf("Current branch: %s\n", status.CurrentBranch)
fmt.Printf("Commits ahead: %d\n", status.Ahead)
fmt.Printf("Commits behind: %d\n", status.Behind)
for _, file := range status.FileStatus {
fmt.Printf("File: %s\n", file.Path)
}
// List branches
branches, err := sandbox.Git.Branches(ctx, "workspace/repo")
if err != nil {
log.Fatal(err)
}
for _, branch := range branches {
fmt.Printf("Branch: %s\n", branch)
}
```
```bash
curl 'https://proxy.app.daytona.io/toolbox/{sandboxId}/git/status?path='
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk/sync/git.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk/git.md), [Ruby SDK](https://www.daytona.io/docs/en/ruby-sdk/git.md), [Go SDK](https://www.daytona.io/docs/en/go-sdk.md) and [API](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox) references:
> [**status (Python SDK)**](https://www.daytona.io/docs/en/python-sdk/sync/git.md#gitstatus)
>
> [**status (TypeScript SDK)**](https://www.daytona.io/docs/en/typescript-sdk/git.md#status)
>
> [**status (Ruby SDK)**](https://www.daytona.io/docs/en/ruby-sdk/git.md#status)
>
> [**Status (Go SDK)**](https://www.daytona.io/docs/en/go-sdk/daytona.md#GitService.Status)
>
> [**get Git repository status (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/git/GET/git/status)
## Branch operations
Daytona provides methods to manage branches in Git repositories. You can create, switch, and delete branches.
### Create branches
Daytona provides methods to create branches in Git repositories. The following snippet creates a new branch called `new-feature`.
```python
# Create a new branch
sandbox.git.create_branch("workspace/repo", "new-feature")
```
```typescript
// Create new branch
await sandbox.git.createBranch('workspace/repo', 'new-feature');
```
```ruby
# Create a new branch
sandbox.git.create_branch('workspace/repo', 'new-feature')
```
```go
// Create a new branch
err := sandbox.Git.CreateBranch(ctx, "workspace/repo", "new-feature")
if err != nil {
log.Fatal(err)
}
```
```bash
curl 'https://proxy.app.daytona.io/toolbox/{sandboxId}/git/branches' \
--request POST \
--header 'Content-Type: application/json' \
--data '{
"name": "",
"path": ""
}'
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk/sync/git.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk/git.md), [Ruby SDK](https://www.daytona.io/docs/en/ruby-sdk/git.md), [Go SDK](https://www.daytona.io/docs/en/go-sdk.md) and [API](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox) references:
> [**create_branch (Python SDK)**](https://www.daytona.io/docs/en/python-sdk/sync/git.md#gitcreate_branch)
>
> [**createBranch (TypeScript SDK)**](https://www.daytona.io/docs/en/typescript-sdk/git.md#createbranch)
>
> [**create_branch (Ruby SDK)**](https://www.daytona.io/docs/en/ruby-sdk/git.md#create_branch)
>
> [**CreateBranch (Go SDK)**](https://www.daytona.io/docs/en/go-sdk/daytona.md#GitService.CreateBranch)
>
> [**create branch (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/git/POST/git/branches)
### Checkout branches
Daytona provides methods to checkout branches in Git repositories. The following snippet checks out the branch called `feature-branch`.
```python
# Checkout a branch
sandbox.git.checkout_branch("workspace/repo", "feature-branch")
```
```typescript
// Checkout a branch
await sandbox.git.checkoutBranch('workspace/repo', 'feature-branch');
```
```ruby
# Checkout a branch
sandbox.git.checkout_branch('workspace/repo', 'feature-branch')
```
```go
// Checkout a branch
err := sandbox.Git.Checkout(ctx, "workspace/repo", "feature-branch")
if err != nil {
log.Fatal(err)
}
```
```bash
curl 'https://proxy.app.daytona.io/toolbox/{sandboxId}/git/checkout' \
--request POST \
--header 'Content-Type: application/json' \
--data '{
"branch": "",
"path": ""
}'
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk/sync/git.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk/git.md), [Ruby SDK](https://www.daytona.io/docs/en/ruby-sdk/git.md), [Go SDK](https://www.daytona.io/docs/en/go-sdk.md) and [API](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox) references:
> [**checkout_branch (Python SDK)**](https://www.daytona.io/docs/en/python-sdk/sync/git.md#gitcheckout_branch)
>
> [**checkoutBranch (TypeScript SDK)**](https://www.daytona.io/docs/en/typescript-sdk/git.md#checkoutbranch)
>
> [**checkout_branch (Ruby SDK)**](https://www.daytona.io/docs/en/ruby-sdk/git.md#checkout_branch)
>
> [**Checkout (Go SDK)**](https://www.daytona.io/docs/en/go-sdk/daytona.md#GitService.Checkout)
>
> [**checkout branch (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/git/POST/git/checkout)
### Delete branches
Daytona provides methods to delete branches in Git repositories. The following snippet deletes the branch called `old-feature`.
```python
# Delete a branch
sandbox.git.delete_branch("workspace/repo", "old-feature")
```
```typescript
// Delete a branch
await sandbox.git.deleteBranch('workspace/repo', 'old-feature');
```
```ruby
# Delete a branch
sandbox.git.delete_branch('workspace/repo', 'old-feature')
```
```go
// Delete a branch
err := sandbox.Git.DeleteBranch(ctx, "workspace/repo", "old-feature")
if err != nil {
log.Fatal(err)
}
```
```bash
curl 'https://proxy.app.daytona.io/toolbox/{sandboxId}/git/branches' \
--request DELETE \
--header 'Content-Type: application/json' \
--data '{
"name": "",
"path": ""
}'
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk/sync/git.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk/git.md), [Ruby SDK](https://www.daytona.io/docs/en/ruby-sdk/git.md), [Go SDK](https://www.daytona.io/docs/en/go-sdk.md) and [API](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox) references:
> [**delete_branch (Python SDK)**](https://www.daytona.io/docs/en/python-sdk/sync/git.md#gitdelete_branch)
>
> [**deleteBranch (TypeScript SDK)**](https://www.daytona.io/docs/en/typescript-sdk/git.md#deletebranch)
>
> [**delete_branch (Ruby SDK)**](https://www.daytona.io/docs/en/ruby-sdk/git.md#delete_branch)
>
> [**DeleteBranch (Go SDK)**](https://www.daytona.io/docs/en/go-sdk/daytona.md#GitService.DeleteBranch)
>
> [**delete branch (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/git/DELETE/git/branches)
## Stage changes
Daytona provides methods to stage changes in Git repositories. You can stage specific files, multiple files at once, or the entire repository. The following snippets stage the file `file.txt`, a set of source files, or all changes.
```python
# Stage a single file
sandbox.git.add("workspace/repo", ["file.txt"])
# Stage multiple files
sandbox.git.add("workspace/repo", [
"src/main.py",
"tests/test_main.py",
"README.md"
])
```
```typescript
// Stage a single file
await sandbox.git.add('workspace/repo', ['file.txt']);
// Stage whole repository
await sandbox.git.add('workspace/repo', ['.']);
```
```ruby
# Stage a single file
sandbox.git.add('workspace/repo', ['file.txt'])
```
```go
// Stage a single file
err := sandbox.Git.Add(ctx, "workspace/repo", []string{"file.txt"})
if err != nil {
log.Fatal(err)
}
// Stage multiple files
err = sandbox.Git.Add(ctx, "workspace/repo", []string{
"src/main.py",
"tests/test_main.py",
"README.md",
})
if err != nil {
log.Fatal(err)
}
// Stage whole repository
err = sandbox.Git.Add(ctx, "workspace/repo", []string{"."})
if err != nil {
log.Fatal(err)
}
```
```bash
curl 'https://proxy.app.daytona.io/toolbox/{sandboxId}/git/add' \
--request POST \
--header 'Content-Type: application/json' \
--data '{
"files": [
""
],
"path": ""
}'
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk/sync/git.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk/git.md), [Ruby SDK](https://www.daytona.io/docs/en/ruby-sdk/git.md), [Go SDK](https://www.daytona.io/docs/en/go-sdk.md) and [API](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox) references:
> [**add (Python SDK)**](https://www.daytona.io/docs/en/python-sdk/sync/git.md#gitadd)
>
> [**add (TypeScript SDK)**](https://www.daytona.io/docs/en/typescript-sdk/git.md#add)
>
> [**add (Ruby SDK)**](https://www.daytona.io/docs/en/ruby-sdk/git.md#add)
>
> [**Add (Go SDK)**](https://www.daytona.io/docs/en/go-sdk/daytona.md#GitService.Add)
>
> [**add (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/git/POST/git/add)
## Commit changes
Daytona provides methods to commit changes in Git repositories. You can set the commit message, author name, and author email. The following snippet commits the staged changes with the message `Update documentation`, authored by `John Doe` (`john@example.com`).
```python
# Stage and commit changes
sandbox.git.add("workspace/repo", ["README.md"])
sandbox.git.commit(
path="workspace/repo",
message="Update documentation",
author="John Doe",
email="john@example.com",
allow_empty=True
)
```
```typescript
// Stage and commit changes
await sandbox.git.add('workspace/repo', ['README.md']);
await sandbox.git.commit(
'workspace/repo',
'Update documentation',
'John Doe',
'john@example.com',
true
);
```
```ruby
# Stage and commit changes
sandbox.git.add('workspace/repo', ['README.md'])
sandbox.git.commit('workspace/repo', 'Update documentation', 'John Doe', 'john@example.com', true)
```
```go
// Stage and commit changes
err := sandbox.Git.Add(ctx, "workspace/repo", []string{"README.md"})
if err != nil {
log.Fatal(err)
}
response, err := sandbox.Git.Commit(ctx, "workspace/repo",
"Update documentation",
"John Doe",
"john@example.com",
options.WithAllowEmpty(true),
)
if err != nil {
log.Fatal(err)
}
fmt.Printf("Commit SHA: %s\n", response.SHA)
```
```bash
curl 'https://proxy.app.daytona.io/toolbox/{sandboxId}/git/commit' \
--request POST \
--header 'Content-Type: application/json' \
--data '{
"allow_empty": true,
"author": "",
"email": "",
"message": "",
"path": ""
}'
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk/sync/git.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk/git.md), [Ruby SDK](https://www.daytona.io/docs/en/ruby-sdk/git.md), [Go SDK](https://www.daytona.io/docs/en/go-sdk.md) and [API](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox) references:
> [**commit (Python SDK)**](https://www.daytona.io/docs/en/python-sdk/sync/git.md#gitcommit)
>
> [**commit (TypeScript SDK)**](https://www.daytona.io/docs/en/typescript-sdk/git.md#commit)
>
> [**commit (Ruby SDK)**](https://www.daytona.io/docs/en/ruby-sdk/git.md#commit)
>
> [**Commit (Go SDK)**](https://www.daytona.io/docs/en/go-sdk/daytona.md#GitService.Commit)
>
> [**commit (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/git/POST/git/commit)
## Remote operations
Daytona provides methods to work with remote repositories in Git. You can push and pull changes from remote repositories.
### Push changes
Daytona provides methods to push changes to remote repositories. The following snippets push changes without authentication (for public repositories or SSH) and with a username and access token (for private repositories).
```python
# Push without authentication (for public repos or SSH)
sandbox.git.push("workspace/repo")
# Push with authentication
sandbox.git.push(
path="workspace/repo",
username="user",
password="github_token"
)
```
```typescript
// Push to a public repository
await sandbox.git.push('workspace/repo');
// Push to a private repository
await sandbox.git.push(
'workspace/repo',
'user',
'token'
);
```
```ruby
# Push changes
sandbox.git.push('workspace/repo')
```
```go
// Push without authentication (for public repos or SSH)
err := sandbox.Git.Push(ctx, "workspace/repo")
if err != nil {
log.Fatal(err)
}
// Push with authentication
err = sandbox.Git.Push(ctx, "workspace/repo",
options.WithPushUsername("user"),
options.WithPushPassword("github_token"),
)
if err != nil {
log.Fatal(err)
}
```
```bash
curl 'https://proxy.app.daytona.io/toolbox/{sandboxId}/git/push' \
--request POST \
--header 'Content-Type: application/json' \
--data '{
"password": "",
"path": "",
"username": ""
}'
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk/sync/git.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk/git.md), [Ruby SDK](https://www.daytona.io/docs/en/ruby-sdk/git.md), [Go SDK](https://www.daytona.io/docs/en/go-sdk.md) and [API](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox) references:
> [**push (Python SDK)**](https://www.daytona.io/docs/en/python-sdk/sync/git.md#gitpush)
>
> [**push (TypeScript SDK)**](https://www.daytona.io/docs/en/typescript-sdk/git.md#push)
>
> [**push (Ruby SDK)**](https://www.daytona.io/docs/en/ruby-sdk/git.md#push)
>
> [**Push (Go SDK)**](https://www.daytona.io/docs/en/go-sdk/daytona.md#GitService.Push)
>
> [**push (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/git/POST/git/push)
### Pull changes
Daytona provides methods to pull changes from remote repositories. The following snippets pull changes without authentication and with a username and access token.
```python
# Pull without authentication
sandbox.git.pull("workspace/repo")
# Pull with authentication
sandbox.git.pull(
path="workspace/repo",
username="user",
password="github_token"
)
```
```typescript
// Pull from a public repository
await sandbox.git.pull('workspace/repo');
// Pull from a private repository
await sandbox.git.pull(
'workspace/repo',
'user',
'token'
);
```
```ruby
# Pull changes
sandbox.git.pull('workspace/repo')
```
```go
// Pull without authentication
err := sandbox.Git.Pull(ctx, "workspace/repo")
if err != nil {
log.Fatal(err)
}
// Pull with authentication
err = sandbox.Git.Pull(ctx, "workspace/repo",
options.WithPullUsername("user"),
options.WithPullPassword("github_token"),
)
if err != nil {
log.Fatal(err)
}
```
```bash
curl 'https://proxy.app.daytona.io/toolbox/{sandboxId}/git/pull' \
--request POST \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer YOUR_API_KEY' \
--data '{
"path": "workspace/repo",
"username": "user",
"password": "github_token"
}'
```
For more information, see the [Python SDK](https://www.daytona.io/docs/en/python-sdk/sync/git.md), [TypeScript SDK](https://www.daytona.io/docs/en/typescript-sdk/git.md), [Ruby SDK](https://www.daytona.io/docs/en/ruby-sdk/git.md), [Go SDK](https://www.daytona.io/docs/en/go-sdk.md) and [API](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox) references:
> [**pull (Python SDK)**](https://www.daytona.io/docs/en/python-sdk/sync/git.md#gitpull)
>
> [**pull (TypeScript SDK)**](https://www.daytona.io/docs/en/typescript-sdk/git.md#pull)
>
> [**pull (Ruby SDK)**](https://www.daytona.io/docs/en/ruby-sdk/git.md#pull)
>
> [**Pull (Go SDK)**](https://www.daytona.io/docs/en/go-sdk/daytona.md#GitService.Pull)
>
> [**pull (API)**](https://www.daytona.io/docs/en/tools/api.md#daytona-toolbox/tag/git/POST/git/pull)
{/* Code generated by gomarkdoc. DO NOT EDIT */}
# daytona
```go
import "github.com/daytonaio/daytona/libs/sdk-go/pkg/daytona"
```
Package daytona provides a Go SDK for interacting with the Daytona platform.
The Daytona SDK enables developers to programmatically create, manage, and interact with sandboxes: isolated development environments that can run code, execute commands, and manage files.
### Getting Started
Create a client using your API key or JWT token:
```
client, err := daytona.NewClient()
if err != nil {
log.Fatal(err)
}
```
The client reads configuration from environment variables:
- `DAYTONA_API_KEY`: API key for authentication
- `DAYTONA_JWT_TOKEN`: JWT token for authentication (alternative to an API key)
- `DAYTONA_ORGANIZATION_ID`: organization ID (required when using a JWT token)
- `DAYTONA_API_URL`: API URL (defaults to `https://app.daytona.io/api`)
- `DAYTONA_TARGET`: target environment
Or provide configuration explicitly:
```
client, err := daytona.NewClientWithConfig(&types.DaytonaConfig{
APIKey: "your-api-key",
APIUrl: "https://your-instance.daytona.io/api",
})
```
### Creating Sandboxes
Create a sandbox from a snapshot:
```
sandbox, err := client.Create(ctx, types.SnapshotParams{
Snapshot: "my-snapshot",
})
```
Create a sandbox from a Docker image:
```
sandbox, err := client.Create(ctx, types.ImageParams{
Image: "python:3.11",
})
```
### Working with Sandboxes
Execute code in a sandbox:
```
result, err := sandbox.Process.CodeRun(ctx, "print('Hello, World!')")
```
Run shell commands:
```
result, err := sandbox.Process.ExecuteCommand(ctx, "ls -la")
```
## Index
- [Variables](<#variables>)
- [type Client](<#Client>)
- [func NewClient\(\) \(\*Client, error\)](<#NewClient>)
- [func NewClientWithConfig\(config \*types.DaytonaConfig\) \(\*Client, error\)](<#NewClientWithConfig>)
- [func \(c \*Client\) Close\(ctx context.Context\) error](<#Client.Close>)
- [func \(c \*Client\) Create\(ctx context.Context, params any, opts ...func\(\*options.CreateSandbox\)\) \(\*Sandbox, error\)](<#Client.Create>)
- [func \(c \*Client\) Get\(ctx context.Context, sandboxIDOrName string\) \(\*Sandbox, error\)](<#Client.Get>)
- [func \(c \*Client\) List\(ctx context.Context, labels map\[string\]string, page \*int, limit \*int\) \(\*PaginatedSandboxes, error\)](<#Client.List>)
- [type CodeInterpreterService](<#CodeInterpreterService>)
- [func NewCodeInterpreterService\(toolboxClient \*toolbox.APIClient, otel \*otelState\) \*CodeInterpreterService](<#NewCodeInterpreterService>)
- [func \(c \*CodeInterpreterService\) CreateContext\(ctx context.Context, cwd \*string\) \(map\[string\]any, error\)](<#CodeInterpreterService.CreateContext>)
- [func \(c \*CodeInterpreterService\) DeleteContext\(ctx context.Context, contextID string\) error](<#CodeInterpreterService.DeleteContext>)
- [func \(c \*CodeInterpreterService\) ListContexts\(ctx context.Context\) \(\[\]map\[string\]any, error\)](<#CodeInterpreterService.ListContexts>)
- [func \(c \*CodeInterpreterService\) RunCode\(ctx context.Context, code string, opts ...func\(\*options.RunCode\)\) \(\*OutputChannels, error\)](<#CodeInterpreterService.RunCode>)
- [type ComputerUseService](<#ComputerUseService>)
- [func NewComputerUseService\(toolboxClient \*toolbox.APIClient, otel \*otelState\) \*ComputerUseService](<#NewComputerUseService>)
- [func \(c \*ComputerUseService\) Display\(\) \*DisplayService](<#ComputerUseService.Display>)
- [func \(c \*ComputerUseService\) GetStatus\(ctx context.Context\) \(map\[string\]any, error\)](<#ComputerUseService.GetStatus>)
- [func \(c \*ComputerUseService\) Keyboard\(\) \*KeyboardService](<#ComputerUseService.Keyboard>)
- [func \(c \*ComputerUseService\) Mouse\(\) \*MouseService](<#ComputerUseService.Mouse>)
- [func \(c \*ComputerUseService\) Recording\(\) \*RecordingService](<#ComputerUseService.Recording>)
- [func \(c \*ComputerUseService\) Screenshot\(\) \*ScreenshotService](<#ComputerUseService.Screenshot>)
- [func \(c \*ComputerUseService\) Start\(ctx context.Context\) error](<#ComputerUseService.Start>)
- [func \(c \*ComputerUseService\) Stop\(ctx context.Context\) error](<#ComputerUseService.Stop>)
- [type DisplayService](<#DisplayService>)
- [func NewDisplayService\(toolboxClient \*toolbox.APIClient, otel \*otelState\) \*DisplayService](<#NewDisplayService>)
- [func \(d \*DisplayService\) GetInfo\(ctx context.Context\) \(map\[string\]any, error\)](<#DisplayService.GetInfo>)
- [func \(d \*DisplayService\) GetWindows\(ctx context.Context\) \(map\[string\]any, error\)](<#DisplayService.GetWindows>)
- [type DockerImage](<#DockerImage>)
- [func Base\(baseImage string\) \*DockerImage](<#Base>)
- [func DebianSlim\(pythonVersion \*string\) \*DockerImage](<#DebianSlim>)
- [func FromDockerfile\(dockerfile string\) \*DockerImage](<#FromDockerfile>)
- [func \(img \*DockerImage\) Add\(source, destination string\) \*DockerImage](<#DockerImage.Add>)
- [func \(img \*DockerImage\) AddLocalDir\(localPath, remotePath string\) \*DockerImage](<#DockerImage.AddLocalDir>)
- [func \(img \*DockerImage\) AddLocalFile\(localPath, remotePath string\) \*DockerImage](<#DockerImage.AddLocalFile>)
- [func \(img \*DockerImage\) AptGet\(packages \[\]string\) \*DockerImage](<#DockerImage.AptGet>)
- [func \(img \*DockerImage\) Cmd\(cmd \[\]string\) \*DockerImage](<#DockerImage.Cmd>)
- [func \(img \*DockerImage\) Contexts\(\) \[\]DockerImageContext](<#DockerImage.Contexts>)
- [func \(img \*DockerImage\) Copy\(source, destination string\) \*DockerImage](<#DockerImage.Copy>)
- [func \(img \*DockerImage\) Dockerfile\(\) string](<#DockerImage.Dockerfile>)
- [func \(img \*DockerImage\) Entrypoint\(cmd \[\]string\) \*DockerImage](<#DockerImage.Entrypoint>)
- [func \(img \*DockerImage\) Env\(key, value string\) \*DockerImage](<#DockerImage.Env>)
- [func \(img \*DockerImage\) Expose\(ports \[\]int\) \*DockerImage](<#DockerImage.Expose>)
- [func \(img \*DockerImage\) Label\(key, value string\) \*DockerImage](<#DockerImage.Label>)
- [func \(img \*DockerImage\) PipInstall\(packages \[\]string, opts ...func\(\*options.PipInstall\)\) \*DockerImage](<#DockerImage.PipInstall>)
- [func \(img \*DockerImage\) Run\(command string\) \*DockerImage](<#DockerImage.Run>)
- [func \(img \*DockerImage\) User\(username string\) \*DockerImage](<#DockerImage.User>)
- [func \(img \*DockerImage\) Volume\(paths \[\]string\) \*DockerImage](<#DockerImage.Volume>)
- [func \(img \*DockerImage\) Workdir\(path string\) \*DockerImage](<#DockerImage.Workdir>)
- [type DockerImageContext](<#DockerImageContext>)
- [type FileSystemService](<#FileSystemService>)
- [func NewFileSystemService\(toolboxClient \*toolbox.APIClient, otel \*otelState\) \*FileSystemService](<#NewFileSystemService>)
- [func \(f \*FileSystemService\) CreateFolder\(ctx context.Context, path string, opts ...func\(\*options.CreateFolder\)\) error](<#FileSystemService.CreateFolder>)
- [func \(f \*FileSystemService\) DeleteFile\(ctx context.Context, path string, recursive bool\) error](<#FileSystemService.DeleteFile>)
- [func \(f \*FileSystemService\) DownloadFile\(ctx context.Context, remotePath string, localPath \*string\) \(\[\]byte, error\)](<#FileSystemService.DownloadFile>)
- [func \(f \*FileSystemService\) FindFiles\(ctx context.Context, path, pattern string\) \(any, error\)](<#FileSystemService.FindFiles>)
- [func \(f \*FileSystemService\) GetFileInfo\(ctx context.Context, path string\) \(\*types.FileInfo, error\)](<#FileSystemService.GetFileInfo>)
- [func \(f \*FileSystemService\) ListFiles\(ctx context.Context, path string\) \(\[\]\*types.FileInfo, error\)](<#FileSystemService.ListFiles>)
- [func \(f \*FileSystemService\) MoveFiles\(ctx context.Context, source, destination string\) error](<#FileSystemService.MoveFiles>)
- [func \(f \*FileSystemService\) ReplaceInFiles\(ctx context.Context, files \[\]string, pattern, newValue string\) \(any, error\)](<#FileSystemService.ReplaceInFiles>)
- [func \(f \*FileSystemService\) SearchFiles\(ctx context.Context, path, pattern string\) \(any, error\)](<#FileSystemService.SearchFiles>)
- [func \(f \*FileSystemService\) SetFilePermissions\(ctx context.Context, path string, opts ...func\(\*options.SetFilePermissions\)\) error](<#FileSystemService.SetFilePermissions>)
- [func \(f \*FileSystemService\) UploadFile\(ctx context.Context, source any, destination string\) error](<#FileSystemService.UploadFile>)
- [type GitService](<#GitService>)
- [func NewGitService\(toolboxClient \*toolbox.APIClient, otel \*otelState\) \*GitService](<#NewGitService>)
- [func \(g \*GitService\) Add\(ctx context.Context, path string, files \[\]string\) error](<#GitService.Add>)
- [func \(g \*GitService\) Branches\(ctx context.Context, path string\) \(\[\]string, error\)](<#GitService.Branches>)
- [func \(g \*GitService\) Checkout\(ctx context.Context, path, name string\) error](<#GitService.Checkout>)
- [func \(g \*GitService\) Clone\(ctx context.Context, url, path string, opts ...func\(\*options.GitClone\)\) error](<#GitService.Clone>)
- [func \(g \*GitService\) Commit\(ctx context.Context, path, message, author, email string, opts ...func\(\*options.GitCommit\)\) \(\*types.GitCommitResponse, error\)](<#GitService.Commit>)
- [func \(g \*GitService\) CreateBranch\(ctx context.Context, path, name string\) error](<#GitService.CreateBranch>)
- [func \(g \*GitService\) DeleteBranch\(ctx context.Context, path, name string, opts ...func\(\*options.GitDeleteBranch\)\) error](<#GitService.DeleteBranch>)
- [func \(g \*GitService\) Pull\(ctx context.Context, path string, opts ...func\(\*options.GitPull\)\) error](<#GitService.Pull>)
- [func \(g \*GitService\) Push\(ctx context.Context, path string, opts ...func\(\*options.GitPush\)\) error](<#GitService.Push>)
- [func \(g \*GitService\) Status\(ctx context.Context, path string\) \(\*types.GitStatus, error\)](<#GitService.Status>)
- [type KeyboardService](<#KeyboardService>)
- [func NewKeyboardService\(toolboxClient \*toolbox.APIClient, otel \*otelState\) \*KeyboardService](<#NewKeyboardService>)
- [func \(k \*KeyboardService\) Hotkey\(ctx context.Context, keys string\) error](<#KeyboardService.Hotkey>)
- [func \(k \*KeyboardService\) Press\(ctx context.Context, key string, modifiers \[\]string\) error](<#KeyboardService.Press>)
- [func \(k \*KeyboardService\) Type\(ctx context.Context, text string, delay \*int\) error](<#KeyboardService.Type>)
- [type LspServerService](<#LspServerService>)
- [func NewLspServerService\(toolboxClient \*toolbox.APIClient, languageID types.LspLanguageID, projectPath string, otel \*otelState\) \*LspServerService](<#NewLspServerService>)
- [func \(l \*LspServerService\) Completions\(ctx context.Context, path string, position types.Position\) \(any, error\)](<#LspServerService.Completions>)
- [func \(l \*LspServerService\) DidClose\(ctx context.Context, path string\) error](<#LspServerService.DidClose>)
- [func \(l \*LspServerService\) DidOpen\(ctx context.Context, path string\) error](<#LspServerService.DidOpen>)
- [func \(l \*LspServerService\) DocumentSymbols\(ctx context.Context, path string\) \(\[\]any, error\)](<#LspServerService.DocumentSymbols>)
- [func \(l \*LspServerService\) SandboxSymbols\(ctx context.Context, query string\) \(\[\]any, error\)](<#LspServerService.SandboxSymbols>)
- [func \(l \*LspServerService\) Start\(ctx context.Context\) error](<#LspServerService.Start>)
- [func \(l \*LspServerService\) Stop\(ctx context.Context\) error](<#LspServerService.Stop>)
- [type MouseService](<#MouseService>)
- [func NewMouseService\(toolboxClient \*toolbox.APIClient, otel \*otelState\) \*MouseService](<#NewMouseService>)
- [func \(m \*MouseService\) Click\(ctx context.Context, x, y int, button \*string, double \*bool\) \(map\[string\]any, error\)](<#MouseService.Click>)
- [func \(m \*MouseService\) Drag\(ctx context.Context, startX, startY, endX, endY int, button \*string\) \(map\[string\]any, error\)](<#MouseService.Drag>)
- [func \(m \*MouseService\) GetPosition\(ctx context.Context\) \(map\[string\]any, error\)](<#MouseService.GetPosition>)
- [func \(m \*MouseService\) Move\(ctx context.Context, x, y int\) \(map\[string\]any, error\)](<#MouseService.Move>)
- [func \(m \*MouseService\) Scroll\(ctx context.Context, x, y int, direction string, amount \*int\) \(bool, error\)](<#MouseService.Scroll>)
- [type OutputChannels](<#OutputChannels>)
- [type PaginatedSandboxes](<#PaginatedSandboxes>)
- [type ProcessService](<#ProcessService>)
- [func NewProcessService\(toolboxClient \*toolbox.APIClient, otel \*otelState, language types.CodeLanguage\) \*ProcessService](<#NewProcessService>)
- [func \(p \*ProcessService\) CodeRun\(ctx context.Context, code string, opts ...func\(\*options.CodeRun\)\) \(\*types.ExecuteResponse, error\)](<#ProcessService.CodeRun>)
- [func \(p \*ProcessService\) ConnectPty\(ctx context.Context, sessionID string\) \(\*PtyHandle, error\)](<#ProcessService.ConnectPty>)
- [func \(p \*ProcessService\) CreatePty\(ctx context.Context, id string, opts ...func\(\*options.CreatePty\)\) \(\*PtyHandle, error\)](<#ProcessService.CreatePty>)
- [func \(p \*ProcessService\) CreatePtySession\(ctx context.Context, id string, opts ...func\(\*options.PtySession\)\) \(\*types.PtySessionInfo, error\)](<#ProcessService.CreatePtySession>)
- [func \(p \*ProcessService\) CreateSession\(ctx context.Context, sessionID string\) error](<#ProcessService.CreateSession>)
- [func \(p \*ProcessService\) DeleteSession\(ctx context.Context, sessionID string\) error](<#ProcessService.DeleteSession>)
- [func \(p \*ProcessService\) ExecuteCommand\(ctx context.Context, command string, opts ...func\(\*options.ExecuteCommand\)\) \(\*types.ExecuteResponse, error\)](<#ProcessService.ExecuteCommand>)
- [func \(p \*ProcessService\) ExecuteSessionCommand\(ctx context.Context, sessionID, command string, runAsync bool, suppressInputEcho bool\) \(map\[string\]any, error\)](<#ProcessService.ExecuteSessionCommand>)
- [func \(p \*ProcessService\) GetEntrypointLogs\(ctx context.Context\) \(\*toolbox.SessionCommandLogsResponse, error\)](<#ProcessService.GetEntrypointLogs>)
- [func \(p \*ProcessService\) GetEntrypointLogsStream\(ctx context.Context, stdout, stderr chan\<\- string\) error](<#ProcessService.GetEntrypointLogsStream>)
- [func \(p \*ProcessService\) GetEntrypointSession\(ctx context.Context\) \(\*toolbox.Session, error\)](<#ProcessService.GetEntrypointSession>)
- [func \(p \*ProcessService\) GetPtySessionInfo\(ctx context.Context, sessionID string\) \(\*types.PtySessionInfo, error\)](<#ProcessService.GetPtySessionInfo>)
- [func \(p \*ProcessService\) GetSession\(ctx context.Context, sessionID string\) \(map\[string\]any, error\)](<#ProcessService.GetSession>)
- [func \(p \*ProcessService\) GetSessionCommand\(ctx context.Context, sessionID, commandID string\) \(map\[string\]any, error\)](<#ProcessService.GetSessionCommand>)
- [func \(p \*ProcessService\) GetSessionCommandLogs\(ctx context.Context, sessionID, commandID string\) \(\*toolbox.SessionCommandLogsResponse, error\)](<#ProcessService.GetSessionCommandLogs>)
- [func \(p \*ProcessService\) GetSessionCommandLogsStream\(ctx context.Context, sessionID, commandID string, stdout, stderr chan\<\- string\) error](<#ProcessService.GetSessionCommandLogsStream>)
- [func \(p \*ProcessService\) KillPtySession\(ctx context.Context, sessionID string\) error](<#ProcessService.KillPtySession>)
- [func \(p \*ProcessService\) ListPtySessions\(ctx context.Context\) \(\[\]\*types.PtySessionInfo, error\)](<#ProcessService.ListPtySessions>)
- [func \(p \*ProcessService\) ListSessions\(ctx context.Context\) \(\[\]map\[string\]any, error\)](<#ProcessService.ListSessions>)
- [func \(p \*ProcessService\) ResizePtySession\(ctx context.Context, sessionID string, ptySize types.PtySize\) \(\*types.PtySessionInfo, error\)](<#ProcessService.ResizePtySession>)
- [type PtyHandle](<#PtyHandle>)
- [func \(h \*PtyHandle\) DataChan\(\) \<\-chan \[\]byte](<#PtyHandle.DataChan>)
- [func \(h \*PtyHandle\) Disconnect\(\) error](<#PtyHandle.Disconnect>)
- [func \(h \*PtyHandle\) Error\(\) \*string](<#PtyHandle.Error>)
- [func \(h \*PtyHandle\) ExitCode\(\) \*int](<#PtyHandle.ExitCode>)
- [func \(h \*PtyHandle\) IsConnected\(\) bool](<#PtyHandle.IsConnected>)
- [func \(h \*PtyHandle\) Kill\(ctx context.Context\) error](<#PtyHandle.Kill>)
- [func \(h \*PtyHandle\) Read\(p \[\]byte\) \(n int, err error\)](<#PtyHandle.Read>)
- [func \(h \*PtyHandle\) Resize\(ctx context.Context, cols, rows int\) \(\*types.PtySessionInfo, error\)](<#PtyHandle.Resize>)
- [func \(h \*PtyHandle\) SendInput\(data \[\]byte\) error](<#PtyHandle.SendInput>)
- [func \(h \*PtyHandle\) SessionID\(\) string](<#PtyHandle.SessionID>)
- [func \(h \*PtyHandle\) Wait\(ctx context.Context\) \(\*types.PtyResult, error\)](<#PtyHandle.Wait>)
- [func \(h \*PtyHandle\) WaitForConnection\(ctx context.Context\) error](<#PtyHandle.WaitForConnection>)
- [func \(h \*PtyHandle\) Write\(p \[\]byte\) \(n int, err error\)](<#PtyHandle.Write>)
- [type PushAccessCredentials](<#PushAccessCredentials>)
- [type RecordingService](<#RecordingService>)
- [func NewRecordingService\(toolboxClient \*toolbox.APIClient\) \*RecordingService](<#NewRecordingService>)
- [func \(r \*RecordingService\) Delete\(ctx context.Context, id string\) error](<#RecordingService.Delete>)
- [func \(r \*RecordingService\) Download\(ctx context.Context, id string, localPath string\) error](<#RecordingService.Download>)
- [func \(r \*RecordingService\) Get\(ctx context.Context, id string\) \(\*toolbox.Recording, error\)](<#RecordingService.Get>)
- [func \(r \*RecordingService\) List\(ctx context.Context\) \(\*toolbox.ListRecordingsResponse, error\)](<#RecordingService.List>)
- [func \(r \*RecordingService\) Start\(ctx context.Context, label \*string\) \(\*toolbox.Recording, error\)](<#RecordingService.Start>)
- [func \(r \*RecordingService\) Stop\(ctx context.Context, id string\) \(\*toolbox.Recording, error\)](<#RecordingService.Stop>)
- [type Sandbox](<#Sandbox>)
- [func NewSandbox\(client \*Client, toolboxClient \*toolbox.APIClient, id string, name string, state apiclient.SandboxState, target string, autoArchiveInterval int, autoDeleteInterval int, networkBlockAll bool, networkAllowList \*string, language types.CodeLanguage\) \*Sandbox](<#NewSandbox>)
- [func \(s \*Sandbox\) Archive\(ctx context.Context\) error](<#Sandbox.Archive>)
- [func \(s \*Sandbox\) Delete\(ctx context.Context\) error](<#Sandbox.Delete>)
- [func \(s \*Sandbox\) DeleteWithTimeout\(ctx context.Context, timeout time.Duration\) error](<#Sandbox.DeleteWithTimeout>)
- [func \(s \*Sandbox\) ExperimentalCreateSnapshot\(ctx context.Context, name string\) error](<#Sandbox.ExperimentalCreateSnapshot>)
- [func \(s \*Sandbox\) ExperimentalCreateSnapshotWithTimeout\(ctx context.Context, name string, timeout time.Duration\) error](<#Sandbox.ExperimentalCreateSnapshotWithTimeout>)
- [func \(s \*Sandbox\) ExperimentalFork\(ctx context.Context, name \*string\) \(\*Sandbox, error\)](<#Sandbox.ExperimentalFork>)
- [func \(s \*Sandbox\) ExperimentalForkWithTimeout\(ctx context.Context, name \*string, timeout time.Duration\) \(\*Sandbox, error\)](<#Sandbox.ExperimentalForkWithTimeout>)
- [func \(s \*Sandbox\) ExpireSignedPreviewLink\(ctx context.Context, port int, token string\) error](<#Sandbox.ExpireSignedPreviewLink>)
- [func \(s \*Sandbox\) GetPreviewLink\(ctx context.Context, port int\) \(\*types.PreviewLink, error\)](<#Sandbox.GetPreviewLink>)
- [func \(s \*Sandbox\) GetSignedPreviewLink\(ctx context.Context, port int, expiresInSeconds int\) \(\*types.SignedPreviewLink, error\)](<#Sandbox.GetSignedPreviewLink>)
- [func \(s \*Sandbox\) GetUserHomeDir\(ctx context.Context\) \(string, error\)](<#Sandbox.GetUserHomeDir>)
- [func \(s \*Sandbox\) GetWorkingDir\(ctx context.Context\) \(string, error\)](<#Sandbox.GetWorkingDir>)
- [func \(s \*Sandbox\) RefreshData\(ctx context.Context\) error](<#Sandbox.RefreshData>)
- [func \(s \*Sandbox\) Resize\(ctx context.Context, resources \*types.Resources\) error](<#Sandbox.Resize>)
- [func \(s \*Sandbox\) ResizeWithTimeout\(ctx context.Context, resources \*types.Resources, timeout time.Duration\) error](<#Sandbox.ResizeWithTimeout>)
- [func \(s \*Sandbox\) SetAutoArchiveInterval\(ctx context.Context, intervalMinutes \*int\) error](<#Sandbox.SetAutoArchiveInterval>)
- [func \(s \*Sandbox\) SetAutoDeleteInterval\(ctx context.Context, intervalMinutes \*int\) error](<#Sandbox.SetAutoDeleteInterval>)
- [func \(s \*Sandbox\) SetLabels\(ctx context.Context, labels map\[string\]string\) error](<#Sandbox.SetLabels>)
- [func \(s \*Sandbox\) Start\(ctx context.Context\) error](<#Sandbox.Start>)
- [func \(s \*Sandbox\) StartWithTimeout\(ctx context.Context, timeout time.Duration\) error](<#Sandbox.StartWithTimeout>)
- [func \(s \*Sandbox\) Stop\(ctx context.Context\) error](<#Sandbox.Stop>)
- [func \(s \*Sandbox\) StopWithTimeout\(ctx context.Context, timeout time.Duration, force bool\) error](<#Sandbox.StopWithTimeout>)
- [func \(s \*Sandbox\) WaitForResize\(ctx context.Context, timeout time.Duration\) error](<#Sandbox.WaitForResize>)
- [func \(s \*Sandbox\) WaitForStart\(ctx context.Context, timeout time.Duration\) error](<#Sandbox.WaitForStart>)
- [func \(s \*Sandbox\) WaitForStop\(ctx context.Context, timeout time.Duration\) error](<#Sandbox.WaitForStop>)
- [type ScreenshotService](<#ScreenshotService>)
- [func NewScreenshotService\(toolboxClient \*toolbox.APIClient, otel \*otelState\) \*ScreenshotService](<#NewScreenshotService>)
- [func \(s \*ScreenshotService\) TakeFullScreen\(ctx context.Context, showCursor \*bool\) \(\*types.ScreenshotResponse, error\)](<#ScreenshotService.TakeFullScreen>)
- [func \(s \*ScreenshotService\) TakeRegion\(ctx context.Context, region types.ScreenshotRegion, showCursor \*bool\) \(\*types.ScreenshotResponse, error\)](<#ScreenshotService.TakeRegion>)
- [type SnapshotService](<#SnapshotService>)
- [func NewSnapshotService\(client \*Client\) \*SnapshotService](<#NewSnapshotService>)
- [func \(s \*SnapshotService\) Create\(ctx context.Context, params \*types.CreateSnapshotParams\) \(\*types.Snapshot, \<\-chan string, error\)](<#SnapshotService.Create>)
- [func \(s \*SnapshotService\) Delete\(ctx context.Context, snapshot \*types.Snapshot\) error](<#SnapshotService.Delete>)
- [func \(s \*SnapshotService\) Get\(ctx context.Context, nameOrID string\) \(\*types.Snapshot, error\)](<#SnapshotService.Get>)
- [func \(s \*SnapshotService\) List\(ctx context.Context, page \*int, limit \*int\) \(\*types.PaginatedSnapshots, error\)](<#SnapshotService.List>)
- [type VolumeService](<#VolumeService>)
- [func NewVolumeService\(client \*Client\) \*VolumeService](<#NewVolumeService>)
- [func \(v \*VolumeService\) Create\(ctx context.Context, name string\) \(\*types.Volume, error\)](<#VolumeService.Create>)
- [func \(v \*VolumeService\) Delete\(ctx context.Context, volume \*types.Volume\) error](<#VolumeService.Delete>)
- [func \(v \*VolumeService\) Get\(ctx context.Context, name string\) \(\*types.Volume, error\)](<#VolumeService.Get>)
- [func \(v \*VolumeService\) List\(ctx context.Context\) \(\[\]\*types.Volume, error\)](<#VolumeService.List>)
- [func \(v \*VolumeService\) WaitForReady\(ctx context.Context, volume \*types.Volume, timeout time.Duration\) \(\*types.Volume, error\)](<#VolumeService.WaitForReady>)
## Variables
Version is the semantic version of the Daytona SDK.
This value is embedded at build time from the VERSION file.
Example:
```
fmt.Printf("Daytona SDK version: %s\n", daytona.Version)
```
```go
var Version = strings.TrimSpace(version)
```
## type Client
Client is the main entry point for interacting with the Daytona platform.
Client provides methods to create, retrieve, list, and manage sandboxes. It handles authentication, API communication, and provides access to services like Volume and Snapshot management.
Create a Client using [NewClient](<#NewClient>) or [NewClientWithConfig](<#NewClientWithConfig>):
```
client, err := daytona.NewClient()
if err != nil {
log.Fatal(err)
}
```
The Client is safe for concurrent use by multiple goroutines.
```go
type Client struct {
// Otel holds OpenTelemetry state; nil when OTel is disabled.
Otel *otelState
// Volume provides methods for managing persistent volumes.
Volume *VolumeService
// Snapshot provides methods for managing sandbox snapshots.
Snapshot *SnapshotService
// contains filtered or unexported fields
}
```
### func NewClient
```go
func NewClient() (*Client, error)
```
NewClient creates a new Daytona client with default configuration.
NewClient reads configuration from environment variables:
- `DAYTONA_API_KEY` or `DAYTONA_JWT_TOKEN` for authentication (one is required)
- `DAYTONA_ORGANIZATION_ID` (required when using a JWT token)
- `DAYTONA_API_URL` for a custom API endpoint
- `DAYTONA_TARGET` for the target environment
For explicit configuration, use [NewClientWithConfig](<#NewClientWithConfig>) instead.
### func NewClientWithConfig
```go
func NewClientWithConfig(config *types.DaytonaConfig) (*Client, error)
```
NewClientWithConfig creates a new Daytona client with a custom configuration.
Configuration values provided in config take precedence over environment variables. Any configuration field left empty falls back to the corresponding environment variable (see [NewClient](<#NewClient>) for the list of supported variables).
Example:
```
client, err := daytona.NewClientWithConfig(&types.DaytonaConfig{
    APIKey:         "your-api-key",
    APIUrl:         "https://custom.daytona.io/api",
    OrganizationID: "org-123",
})
if err != nil {
    log.Fatal(err)
}
```
Returns an error if neither API key nor JWT token is provided, or if JWT token is provided without an organization ID.
### func \(\*Client\) Close
```go
func (c *Client) Close(ctx context.Context) error
```
Close shuts down the client and releases resources. When OpenTelemetry is enabled, Close flushes and shuts down the OTel providers. It is safe to call Close even when OTel is not enabled.
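A typical pattern is to defer Close immediately after a successful NewClient:
```
client, err := daytona.NewClient()
if err != nil {
    log.Fatal(err)
}
defer func() {
    if err := client.Close(context.Background()); err != nil {
        log.Printf("close: %v", err)
    }
}()
```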
### func \(\*Client\) Create
```go
func (c *Client) Create(ctx context.Context, params any, opts ...func(*options.CreateSandbox)) (*Sandbox, error)
```
Create creates a new sandbox with the specified parameters.
The params argument accepts either \[types.SnapshotParams\] to create from a snapshot, or \[types.ImageParams\] to create from a Docker image:
```
// Create from a snapshot
sandbox, err := client.Create(ctx, types.SnapshotParams{
    Snapshot: "my-snapshot",
    SandboxBaseParams: types.SandboxBaseParams{
        Name: "my-sandbox",
    },
})

// Create from a Docker image
sandbox, err := client.Create(ctx, types.ImageParams{
    Image: "python:3.11",
    Resources: &types.Resources{
        CPU:    2,
        Memory: 4096,
    },
})
```
By default, Create waits for the sandbox to reach the started state before returning. Use \[options.WithWaitForStart\]\(false\) to return immediately after creation.
Optional parameters can be configured using functional options:
- \[options.WithTimeout\]: Set maximum wait time for creation
- \[options.WithWaitForStart\]: Control whether to wait for started state
- \[options.WithLogChannel\]: Receive build logs during image builds
Returns the created [Sandbox](#Sandbox) or an error if creation fails.
### func \(\*Client\) Get
```go
func (c *Client) Get(ctx context.Context, sandboxIDOrName string) (*Sandbox, error)
```
Get retrieves an existing sandbox by its ID or name.
The sandboxIDOrName parameter accepts either the sandbox's unique ID or its human-readable name. If a sandbox with the given identifier is not found, an [errors.DaytonaNotFoundError] is returned.
Example:
```
sandbox, err := client.Get(ctx, "my-sandbox")
if err != nil {
    var notFound *errors.DaytonaNotFoundError
    if errors.As(err, &notFound) {
        log.Println("Sandbox not found")
    }
    return err
}
```
### func \(\*Client\) List
```go
func (c *Client) List(ctx context.Context, labels map[string]string, page *int, limit *int) (*PaginatedSandboxes, error)
```
List retrieves sandboxes with optional label filtering and pagination.
Parameters:
- labels: Optional map of labels to filter sandboxes. Pass nil for no filtering.
- page: Optional page number \(1\-indexed\). Pass nil for the first page.
- limit: Optional number of results per page. Pass nil for the default limit.
Example:
```
// List all sandboxes
result, err := client.List(ctx, nil, nil, nil)
// List sandboxes with pagination
page, limit := 1, 10
result, err := client.List(ctx, nil, &page, &limit)
// Filter by labels
result, err := client.List(ctx, map[string]string{"env": "dev"}, nil, nil)
// Iterate through results
for _, sandbox := range result.Items {
    fmt.Printf("Sandbox: %s (state: %s)\n", sandbox.Name, sandbox.State)
}
```
Returns a [PaginatedSandboxes](#PaginatedSandboxes) containing the matching sandboxes and pagination metadata.
## type CodeInterpreterService
CodeInterpreterService provides Python code execution capabilities for a sandbox.
CodeInterpreterService enables running Python code in isolated execution contexts with support for streaming output, persistent state, and environment variables. It uses WebSockets for real\-time output streaming. Access through \[Sandbox.CodeInterpreter\].
Example:
```
// Simple code execution
channels, err := sandbox.CodeInterpreter.RunCode(ctx, "print('Hello, World!')")
if err != nil {
    return err
}

// Wait for completion and get result
result := <-channels.Done
fmt.Println(result.Stdout)

// With persistent context
ctxInfo, _ := sandbox.CodeInterpreter.CreateContext(ctx, nil)
contextID := ctxInfo["id"].(string)

channels, _ = sandbox.CodeInterpreter.RunCode(ctx, "x = 42",
    options.WithCustomContext(contextID),
)
<-channels.Done

channels, _ = sandbox.CodeInterpreter.RunCode(ctx, "print(x)",
    options.WithCustomContext(contextID),
)
```
```go
type CodeInterpreterService struct {
    // contains filtered or unexported fields
}
```
### func NewCodeInterpreterService
```go
func NewCodeInterpreterService(toolboxClient *toolbox.APIClient, otel *otelState) *CodeInterpreterService
```
NewCodeInterpreterService creates a new CodeInterpreterService.
This is typically called internally by the SDK when creating a [Sandbox](#Sandbox). Users should access CodeInterpreterService through [Sandbox.CodeInterpreter] rather than creating it directly.
### func \(\*CodeInterpreterService\) CreateContext
```go
func (c *CodeInterpreterService) CreateContext(ctx context.Context, cwd *string) (map[string]any, error)
```
CreateContext creates an isolated execution context for persistent state.
Contexts allow you to maintain state \(variables, imports, etc.\) across multiple code executions. Without a context, each RunCode call starts fresh.
Parameters:
- cwd: Optional working directory for the context
Example:
```
// Create a context
ctxInfo, err := sandbox.CodeInterpreter.CreateContext(ctx, nil)
if err != nil {
    return err
}
contextID := ctxInfo["id"].(string)
// Use the context to maintain state
sandbox.CodeInterpreter.RunCode(ctx, "x = 42", options.WithCustomContext(contextID))
sandbox.CodeInterpreter.RunCode(ctx, "print(x)", options.WithCustomContext(contextID)) // prints 42
// Clean up when done
sandbox.CodeInterpreter.DeleteContext(ctx, contextID)
```
Returns context information including "id", "cwd", "language", "active", and "createdAt".
### func \(\*CodeInterpreterService\) DeleteContext
```go
func (c *CodeInterpreterService) DeleteContext(ctx context.Context, contextID string) error
```
DeleteContext removes an execution context and releases its resources.
Parameters:
- contextID: The context identifier to delete
Example:
```
err := sandbox.CodeInterpreter.DeleteContext(ctx, contextID)
```
Returns an error if the context doesn't exist or deletion fails.
### func \(\*CodeInterpreterService\) ListContexts
```go
func (c *CodeInterpreterService) ListContexts(ctx context.Context) ([]map[string]any, error)
```
ListContexts returns all active execution contexts.
Example:
```
contexts, err := sandbox.CodeInterpreter.ListContexts(ctx)
if err != nil {
    return err
}
for _, ctx := range contexts {
    fmt.Printf("Context %s (language: %s)\n", ctx["id"], ctx["language"])
}
```
Returns a slice of context information maps.
### func \(\*CodeInterpreterService\) RunCode
```go
func (c *CodeInterpreterService) RunCode(ctx context.Context, code string, opts ...func(*options.RunCode)) (*OutputChannels, error)
```
RunCode executes Python code and returns channels for streaming output.
This method establishes a WebSocket connection to execute code asynchronously, streaming stdout and stderr as they become available.
Optional parameters can be configured using functional options:
- \[options.WithCustomContext\]: Use a persistent context for state
- \[options.WithEnv\]: Set environment variables
- \[options.WithInterpreterTimeout\]: Set execution timeout
Example:
```
// Basic execution
channels, err := sandbox.CodeInterpreter.RunCode(ctx, `
for i in range(5):
    print(f"Count: {i}")
`)
if err != nil {
    return err
}

// Stream output
for msg := range channels.Stdout {
    fmt.Print(msg.Text)
}

// Get final result
result := <-channels.Done
if result.Error != nil {
    fmt.Printf("Error: %s\n", result.Error.Value)
}

// With options
channels, err = sandbox.CodeInterpreter.RunCode(ctx, "import os; print(os.environ['API_KEY'])",
    options.WithEnv(map[string]string{"API_KEY": "secret"}),
    options.WithInterpreterTimeout(30*time.Second),
)
```
Returns [OutputChannels](#OutputChannels) for receiving streamed output, or an error if connection fails.
## type ComputerUseService
ComputerUseService provides desktop automation operations for a sandbox.
ComputerUseService enables GUI automation including mouse control, keyboard input, screenshots, display management, and screen recording. The desktop environment must be started before using these features. Access through \[Sandbox.ComputerUse\].
Example:
```
cu := sandbox.ComputerUse

// Start the desktop environment
if err := cu.Start(ctx); err != nil {
    return err
}
defer cu.Stop(ctx)

// Take a screenshot
screenshot, err := cu.Screenshot().TakeFullScreen(ctx, nil)
if err != nil {
    return err
}

// Click at coordinates
cu.Mouse().Click(ctx, 100, 200, nil, nil)

// Type text
cu.Keyboard().Type(ctx, "Hello, World!", nil)
```
```go
type ComputerUseService struct {
    // contains filtered or unexported fields
}
```
### func NewComputerUseService
```go
func NewComputerUseService(toolboxClient *toolbox.APIClient, otel *otelState) *ComputerUseService
```
NewComputerUseService creates a new ComputerUseService.
This is typically called internally by the SDK when creating a [Sandbox](#Sandbox). Users should access ComputerUseService through [Sandbox.ComputerUse] rather than creating it directly.
### func \(\*ComputerUseService\) Display
```go
func (c *ComputerUseService) Display() *DisplayService
```
Display returns the [DisplayService](#DisplayService) for display information.
The service is lazily initialized on first access.
### func \(\*ComputerUseService\) GetStatus
```go
func (c *ComputerUseService) GetStatus(ctx context.Context) (map[string]any, error)
```
GetStatus returns the current status of the desktop environment.
Example:
```
status, err := cu.GetStatus(ctx)
if err != nil {
    return err
}
fmt.Printf("Desktop status: %v\n", status["status"])
```
Returns a map containing status information.
### func \(\*ComputerUseService\) Keyboard
```go
func (c *ComputerUseService) Keyboard() *KeyboardService
```
Keyboard returns the [KeyboardService](#KeyboardService) for keyboard operations.
The service is lazily initialized on first access.
### func \(\*ComputerUseService\) Mouse
```go
func (c *ComputerUseService) Mouse() *MouseService
```
Mouse returns the [MouseService](#MouseService) for mouse operations.
The service is lazily initialized on first access.
### func \(\*ComputerUseService\) Recording
```go
func (c *ComputerUseService) Recording() *RecordingService
```
Recording returns the [RecordingService](#RecordingService) for screen recording operations.
The service is lazily initialized on first access.
### func \(\*ComputerUseService\) Screenshot
```go
func (c *ComputerUseService) Screenshot() *ScreenshotService
```
Screenshot returns the [ScreenshotService](#ScreenshotService) for capturing screen images.
The service is lazily initialized on first access.
### func \(\*ComputerUseService\) Start
```go
func (c *ComputerUseService) Start(ctx context.Context) error
```
Start initializes and starts the desktop environment.
The desktop environment must be started before using mouse, keyboard, or screenshot operations. Call [ComputerUseService.Stop](#ComputerUseService.Stop) when finished.
Example:
```
if err := cu.Start(ctx); err != nil {
    return err
}
defer cu.Stop(ctx)
```
Returns an error if the desktop fails to start.
### func \(\*ComputerUseService\) Stop
```go
func (c *ComputerUseService) Stop(ctx context.Context) error
```
Stop shuts down the desktop environment and releases resources.
Example:
```
err := cu.Stop(ctx)
```
Returns an error if the desktop fails to stop gracefully.
## type DisplayService
DisplayService provides display information and window management operations.
DisplayService enables querying display configuration and window information. Access through [ComputerUseService.Display](#ComputerUseService.Display).
```go
type DisplayService struct {
    // contains filtered or unexported fields
}
```
### func NewDisplayService
```go
func NewDisplayService(toolboxClient *toolbox.APIClient, otel *otelState) *DisplayService
```
NewDisplayService creates a new DisplayService.
### func \(\*DisplayService\) GetInfo
```go
func (d *DisplayService) GetInfo(ctx context.Context) (map[string]any, error)
```
GetInfo returns information about connected displays.
Example:
```
info, err := display.GetInfo(ctx)
if err != nil {
    return err
}
displays := info["displays"]
fmt.Printf("Connected displays: %v\n", displays)
```
Returns a map containing display information.
### func \(\*DisplayService\) GetWindows
```go
func (d *DisplayService) GetWindows(ctx context.Context) (map[string]any, error)
```
GetWindows returns information about open windows.
Example:
```
result, err := display.GetWindows(ctx)
if err != nil {
    return err
}
windows := result["windows"]
fmt.Printf("Open windows: %v\n", windows)
```
Returns a map containing window information.
## type DockerImage
DockerImage provides a fluent interface for building Docker images declaratively.
DockerImage allows you to define Docker images using Go code instead of Dockerfiles. Methods can be chained to build up the image definition, which is then converted to a Dockerfile when used with [SnapshotService.Create](#SnapshotService.Create).
Example:
```
// Create a Python image with dependencies
image := daytona.Base("python:3.11-slim").
    AptGet([]string{"git", "curl"}).
    PipInstall([]string{"numpy", "pandas"}).
    Workdir("/app").
    Env("PYTHONUNBUFFERED", "1")

// Use with snapshot creation
snapshot, logChan, err := client.Snapshot.Create(ctx, &types.CreateSnapshotParams{
    Name:        "my-python-env",
    DockerImage: image,
})
```
```go
type DockerImage struct {
    // contains filtered or unexported fields
}
```
### func Base
```go
func Base(baseImage string) *DockerImage
```
Base creates a new Image from a base Docker image.
This is typically the starting point for building an image definition. The baseImage parameter is any valid Docker image reference.
Example:
```
image := daytona.Base("ubuntu:22.04")
image := daytona.Base("python:3.11-slim")
image := daytona.Base("node:18-alpine")
```
### func DebianSlim
```go
func DebianSlim(pythonVersion *string) *DockerImage
```
DebianSlim creates a Python image based on Debian slim.
This is a convenience function for creating Python environments. If pythonVersion is nil, defaults to Python 3.12.
Example:
```
// Use default Python 3.12
image := daytona.DebianSlim(nil)
// Use specific version
version := "3.10"
image := daytona.DebianSlim(&version)
```
### func FromDockerfile
```go
func FromDockerfile(dockerfile string) *DockerImage
```
FromDockerfile creates an Image from an existing Dockerfile string.
Use this when you have an existing Dockerfile you want to use.
Example:
```
dockerfile := `FROM python:3.11
RUN pip install numpy
WORKDIR /app`
image := daytona.FromDockerfile(dockerfile)
```
### func \(\*DockerImage\) Add
```go
func (img *DockerImage) Add(source, destination string) *DockerImage
```
Add adds an ADD instruction to the image.
ADD supports URLs and automatic tar extraction. For simple file copying, prefer [DockerImage.Copy](#DockerImage.Copy).
Example:
```
image := daytona.Base("ubuntu:22.04").
    Add("https://example.com/app.tar.gz", "/app/")
```
### func \(\*DockerImage\) AddLocalDir
```go
func (img *DockerImage) AddLocalDir(localPath, remotePath string) *DockerImage
```
AddLocalDir adds a local directory to the build context and copies it to the image.
The directory is uploaded to object storage and included in the Docker build context.
Example:
```
image := daytona.Base("python:3.11").
    AddLocalDir("./src", "/app/src")
```
### func \(\*DockerImage\) AddLocalFile
```go
func (img *DockerImage) AddLocalFile(localPath, remotePath string) *DockerImage
```
AddLocalFile adds a local file to the build context and copies it to the image.
The file is uploaded to object storage and included in the Docker build context.
Example:
```
image := daytona.Base("python:3.11").
    AddLocalFile("./requirements.txt", "/app/requirements.txt").
    Run("pip install -r /app/requirements.txt")
```
### func \(\*DockerImage\) AptGet
```go
func (img *DockerImage) AptGet(packages []string) *DockerImage
```
AptGet adds an apt\-get install instruction for system packages.
This automatically handles updating the package list and cleaning up afterward to minimize image size.
Example:
```
image := daytona.Base("ubuntu:22.04").AptGet([]string{"git", "curl", "build-essential"})
```
### func \(\*DockerImage\) Cmd
```go
func (img *DockerImage) Cmd(cmd []string) *DockerImage
```
Cmd sets the default command for the image.
If an entrypoint is set, the cmd provides default arguments to it.
Example:
```
image := daytona.Base("python:3.11").
    Cmd([]string{"python", "app.py"})
```
### func \(\*DockerImage\) Contexts
```go
func (img *DockerImage) Contexts() []DockerImageContext
```
Contexts returns the build contexts for local files/directories.
This is called internally when creating snapshots to upload local files.
### func \(\*DockerImage\) Copy
```go
func (img *DockerImage) Copy(source, destination string) *DockerImage
```
Copy adds a COPY instruction to copy files into the image.
For local files, use [DockerImage.AddLocalFile](#DockerImage.AddLocalFile) instead, which handles uploading to the build context.
Example:
```
image := daytona.Base("python:3.11").
    Copy("requirements.txt", "/app/requirements.txt")
```
### func \(\*DockerImage\) Dockerfile
```go
func (img *DockerImage) Dockerfile() string
```
Dockerfile returns the generated Dockerfile content.
This is called internally when creating snapshots.
Example:
```
image := daytona.Base("python:3.11").PipInstall([]string{"numpy"})
fmt.Println(image.Dockerfile())
// Output:
// FROM python:3.11
// RUN pip install numpy
```
### func \(\*DockerImage\) Entrypoint
```go
func (img *DockerImage) Entrypoint(cmd []string) *DockerImage
```
Entrypoint sets the entrypoint for the image.
The cmd parameter is the command and arguments as a slice.
Example:
```
image := daytona.Base("python:3.11").
    Entrypoint([]string{"python", "-m", "myapp"})
```
### func \(\*DockerImage\) Env
```go
func (img *DockerImage) Env(key, value string) *DockerImage
```
Env sets an environment variable in the image.
Example:
```
image := daytona.Base("python:3.11").
    Env("PYTHONUNBUFFERED", "1").
    Env("APP_ENV", "production")
```
### func \(\*DockerImage\) Expose
```go
func (img *DockerImage) Expose(ports []int) *DockerImage
```
Expose declares ports that the container listens on.
This is documentation for users and tools; it doesn't actually publish ports.
Example:
```
image := daytona.Base("python:3.11").
    Expose([]int{8080, 8443})
```
### func \(\*DockerImage\) Label
```go
func (img *DockerImage) Label(key, value string) *DockerImage
```
Label adds metadata to the image.
Example:
```
image := daytona.Base("python:3.11").
    Label("maintainer", "team@example.com").
    Label("version", "1.0.0")
```
### func \(\*DockerImage\) PipInstall
```go
func (img *DockerImage) PipInstall(packages []string, opts ...func(*options.PipInstall)) *DockerImage
```
PipInstall adds a pip install instruction for Python packages.
Optional parameters can be configured using functional options:
- \[options.WithFindLinks\]: Add find\-links URLs
- \[options.WithIndexURL\]: Set custom PyPI index
- \[options.WithExtraIndexURLs\]: Add extra index URLs
- \[options.WithPre\]: Allow pre\-release versions
- \[options.WithExtraOptions\]: Add additional pip options
Example:
```
// Basic installation
image := daytona.Base("python:3.11").PipInstall([]string{"numpy", "pandas"})
// With options
image := daytona.Base("python:3.11").PipInstall(
    []string{"torch"},
    options.WithIndexURL("https://download.pytorch.org/whl/cpu"),
    options.WithExtraOptions("--no-cache-dir"),
)
```
### func \(\*DockerImage\) Run
```go
func (img *DockerImage) Run(command string) *DockerImage
```
Run adds a RUN instruction to execute a shell command.
Example:
```
image := daytona.Base("ubuntu:22.04").
    Run("mkdir -p /app/data").
    Run("chmod 755 /app")
```
### func \(\*DockerImage\) User
```go
func (img *DockerImage) User(username string) *DockerImage
```
User sets the user for subsequent instructions and container runtime.
Example:
```
image := daytona.Base("python:3.11").
    Run("useradd -m appuser").
    User("appuser").
    Workdir("/home/appuser")
```
### func \(\*DockerImage\) Volume
```go
func (img *DockerImage) Volume(paths []string) *DockerImage
```
Volume declares mount points for the container.
Example:
```
image := daytona.Base("python:3.11").
    Volume([]string{"/data", "/logs"})
```
### func \(\*DockerImage\) Workdir
```go
func (img *DockerImage) Workdir(path string) *DockerImage
```
Workdir sets the working directory for subsequent instructions.
Example:
```
image := daytona.Base("python:3.11").
    Workdir("/app").
    Run("pip install -r requirements.txt")
```
## type DockerImageContext
DockerImageContext represents a local file or directory to include in the build context.
When using [DockerImage.AddLocalFile](#DockerImage.AddLocalFile) or [DockerImage.AddLocalDir](#DockerImage.AddLocalDir), the file/directory is uploaded to object storage and included in the Docker build context.
```go
type DockerImageContext struct {
    SourcePath  string // Local path to the file or directory
    ArchivePath string // Path within the build context archive
}
```
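For example, each AddLocalFile or AddLocalDir call records one context entry, which [DockerImage.Contexts](#DockerImage.Contexts) returns (a sketch; the paths are illustrative):
```
image := daytona.Base("python:3.11").
    AddLocalFile("./requirements.txt", "/app/requirements.txt")
for _, c := range image.Contexts() {
    fmt.Printf("upload %s as %s\n", c.SourcePath, c.ArchivePath)
}
```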
## type FileSystemService
FileSystemService provides file system operations for a sandbox.
FileSystemService enables file and directory management including creating, reading, writing, moving, and deleting files. It also supports file searching and permission management. Access through \[Sandbox.FileSystem\].
Example:
```
// List files in a directory
files, err := sandbox.FileSystem.ListFiles(ctx, "/home/user")
// Create a directory
err = sandbox.FileSystem.CreateFolder(ctx, "/home/user/mydir")
// Upload a file
err = sandbox.FileSystem.UploadFile(ctx, "/local/path/file.txt", "/home/user/file.txt")
// Download a file
data, err := sandbox.FileSystem.DownloadFile(ctx, "/home/user/file.txt", nil)
```
```go
type FileSystemService struct {
    // contains filtered or unexported fields
}
```
### func NewFileSystemService
```go
func NewFileSystemService(toolboxClient *toolbox.APIClient, otel *otelState) *FileSystemService
```
NewFileSystemService creates a new FileSystemService with the provided toolbox client.
This is typically called internally by the SDK when creating a [Sandbox](#Sandbox). Users should access FileSystemService through [Sandbox.FileSystem] rather than creating it directly.
### func \(\*FileSystemService\) CreateFolder
```go
func (f *FileSystemService) CreateFolder(ctx context.Context, path string, opts ...func(*options.CreateFolder)) error
```
CreateFolder creates a directory at the specified path.
The path parameter specifies the absolute path for the new directory. Parent directories are created automatically if they don't exist.
Optional parameters can be configured using functional options:
- \[options.WithMode\]: Set Unix file permissions \(defaults to "0755"\)
Example:
```
// Create with default permissions
err := sandbox.FileSystem.CreateFolder(ctx, "/home/user/mydir")
// Create with custom permissions
err := sandbox.FileSystem.CreateFolder(ctx, "/home/user/private",
    options.WithMode("0700"),
)
```
Returns an error if the directory creation fails.
### func \(\*FileSystemService\) DeleteFile
```go
func (f *FileSystemService) DeleteFile(ctx context.Context, path string, recursive bool) error
```
DeleteFile deletes a file or directory.
Parameters:
- path: The file or directory path to delete
- recursive: If true, delete directories and their contents recursively
Example:
```
// Delete a file
err := sandbox.FileSystem.DeleteFile(ctx, "/home/user/file.txt", false)
// Delete a directory recursively
err := sandbox.FileSystem.DeleteFile(ctx, "/home/user/mydir", true)
```
Returns an error if the deletion fails \(e.g., path doesn't exist, permission denied, or attempting to delete a non\-empty directory without recursive=true\).
### func \(\*FileSystemService\) DownloadFile
```go
func (f *FileSystemService) DownloadFile(ctx context.Context, remotePath string, localPath *string) ([]byte, error)
```
DownloadFile downloads a file from the sandbox.
Parameters:
- remotePath: The path to the file in the sandbox
- localPath: Optional local path to save the file. If nil, only returns the data.
Returns the file contents as a byte slice. If localPath is provided, also writes the contents to that local file.
Example:
```
// Download and get contents
data, err := sandbox.FileSystem.DownloadFile(ctx, "/home/user/file.txt", nil)
fmt.Println(string(data))
// Download and save to local file
localPath := "/tmp/downloaded.txt"
data, err := sandbox.FileSystem.DownloadFile(ctx, "/home/user/file.txt", &localPath)
```
Returns an error if the file doesn't exist or cannot be read.
### func \(\*FileSystemService\) FindFiles
```go
func (f *FileSystemService) FindFiles(ctx context.Context, path, pattern string) (any, error)
```
FindFiles searches for text content within files.
Parameters:
- path: The directory to search in
- pattern: The text pattern to search for \(supports regex\)
Returns a list of matches, each containing the file path, line number, and matching content.
Example:
```
result, err := sandbox.FileSystem.FindFiles(ctx, "/home/user/project", "TODO:")
if err != nil {
    return err
}
matches := result.([]map[string]any)
for _, match := range matches {
    fmt.Printf("%s:%d: %s\n", match["file"], match["line"], match["content"])
}
```
Returns an error if the search fails.
### func \(\*FileSystemService\) GetFileInfo
```go
func (f *FileSystemService) GetFileInfo(ctx context.Context, path string) (*types.FileInfo, error)
```
GetFileInfo retrieves metadata for a file or directory.
The path parameter specifies the file or directory path.
Returns \[types.FileInfo\] containing the file's name, size, permissions, modification time, and whether it's a directory.
Example:
```
info, err := sandbox.FileSystem.GetFileInfo(ctx, "/home/user/file.txt")
if err != nil {
    return err
}
fmt.Printf("Size: %d bytes, Modified: %s\n", info.Size, info.ModifiedTime)
```
Returns an error if the path doesn't exist.
### func \(\*FileSystemService\) ListFiles
```go
func (f *FileSystemService) ListFiles(ctx context.Context, path string) ([]*types.FileInfo, error)
```
ListFiles lists files and directories in the specified path.
The path parameter specifies the directory to list.
Returns a slice of \[types.FileInfo\] containing metadata for each file and directory, including name, size, permissions, modification time, and whether it's a directory.
Example:
```
files, err := sandbox.FileSystem.ListFiles(ctx, "/home/user")
if err != nil {
    return err
}
for _, file := range files {
    if file.IsDirectory {
        fmt.Printf("[DIR] %s\n", file.Name)
    } else {
        fmt.Printf("[FILE] %s (%d bytes)\n", file.Name, file.Size)
    }
}
```
Returns an error if the path doesn't exist or isn't accessible.
### func \(\*FileSystemService\) MoveFiles
```go
func (f *FileSystemService) MoveFiles(ctx context.Context, source, destination string) error
```
MoveFiles moves or renames a file or directory.
Parameters:
- source: The current path of the file or directory
- destination: The new path for the file or directory
This operation can be used for both moving and renaming:
- Same directory, different name = rename
- Different directory = move
Example:
```
// Rename a file
err := sandbox.FileSystem.MoveFiles(ctx, "/home/user/old.txt", "/home/user/new.txt")
// Move a file to another directory
err := sandbox.FileSystem.MoveFiles(ctx, "/home/user/file.txt", "/home/user/backup/file.txt")
```
Returns an error if the operation fails.
### func \(\*FileSystemService\) ReplaceInFiles
```go
func (f *FileSystemService) ReplaceInFiles(ctx context.Context, files []string, pattern, newValue string) (any, error)
```
ReplaceInFiles replaces text in multiple files.
Parameters:
- files: List of file paths to process
- pattern: The text pattern to search for \(supports regex\)
- newValue: The replacement text
Returns a list of results for each file, indicating success or failure.
Example:
```
files := []string{"/home/user/file1.txt", "/home/user/file2.txt"}
result, err := sandbox.FileSystem.ReplaceInFiles(ctx, files, "oldValue", "newValue")
if err != nil {
    return err
}
results := result.([]map[string]any)
for _, r := range results {
    if r["success"].(bool) {
        fmt.Printf("Updated: %s\n", r["file"])
    } else {
        fmt.Printf("Failed: %s - %s\n", r["file"], r["error"])
    }
}
```
Returns an error if the operation fails entirely.
### func \(\*FileSystemService\) SearchFiles
```go
func (f *FileSystemService) SearchFiles(ctx context.Context, path, pattern string) (any, error)
```
SearchFiles searches for files matching a pattern in a directory.
Parameters:
- path: The directory to search in
- pattern: The glob pattern to match file names \(e.g., "\*.txt", "test\_\*"\)
Returns a map containing a "files" key with a list of matching file paths.
Example:
```
result, err := sandbox.FileSystem.SearchFiles(ctx, "/home/user", "*.go")
if err != nil {
    return err
}
files := result.(map[string]any)["files"].([]string)
for _, file := range files {
    fmt.Println(file)
}
```
Returns an error if the search fails.
### func \(\*FileSystemService\) SetFilePermissions
```go
func (f *FileSystemService) SetFilePermissions(ctx context.Context, path string, opts ...func(*options.SetFilePermissions)) error
```
SetFilePermissions sets file permissions, owner, and group.
The path parameter specifies the file or directory to modify.
Optional parameters can be configured using functional options:
- \[options.WithPermissionMode\]: Set Unix file permissions \(e.g., "0644"\)
- \[options.WithOwner\]: Set file owner username
- \[options.WithGroup\]: Set file group name
Example:
```
// Set permissions only
err := sandbox.FileSystem.SetFilePermissions(ctx, "/home/user/script.sh",
    options.WithPermissionMode("0755"),
)

// Set owner and group
err := sandbox.FileSystem.SetFilePermissions(ctx, "/home/user/file.txt",
    options.WithOwner("root"),
    options.WithGroup("users"),
)

// Set all at once
err := sandbox.FileSystem.SetFilePermissions(ctx, "/home/user/file.txt",
    options.WithPermissionMode("0640"),
    options.WithOwner("user"),
    options.WithGroup("staff"),
)
```
Returns an error if the operation fails.
### func \(\*FileSystemService\) UploadFile
```go
func (f *FileSystemService) UploadFile(ctx context.Context, source any, destination string) error
```
UploadFile uploads a file to the sandbox.
Parameters:
- source: Either a local file path \(string\) or file contents \(\[\]byte\)
- destination: The destination path in the sandbox
Example:
```
// Upload from local file path
err := sandbox.FileSystem.UploadFile(ctx, "/local/path/file.txt", "/home/user/file.txt")
// Upload from byte slice
content := []byte("Hello, World!")
err := sandbox.FileSystem.UploadFile(ctx, content, "/home/user/hello.txt")
```
Returns an error if the upload fails.
## type GitService
GitService provides Git operations for a sandbox.
GitService enables common Git workflows including cloning repositories, staging and committing changes, managing branches, and syncing with remote repositories. It is accessed through the \[Sandbox.Git\] field.
Example:
```
// Clone a repository
err := sandbox.Git.Clone(ctx, "https://github.com/user/repo.git", "/home/user/repo")
// Make changes and commit
err = sandbox.Git.Add(ctx, "/home/user/repo", []string{"."})
resp, err := sandbox.Git.Commit(ctx, "/home/user/repo", "Initial commit", "John Doe", "john@example.com")
// Push to remote
err = sandbox.Git.Push(ctx, "/home/user/repo",
options.WithPushUsername("username"),
options.WithPushPassword("token"),
)
```
```go
type GitService struct {
// contains filtered or unexported fields
}
```
### func NewGitService
```go
func NewGitService(toolboxClient *toolbox.APIClient, otel *otelState) *GitService
```
NewGitService creates a new GitService with the provided toolbox client.
This is typically called internally by the SDK when creating a \[Sandbox\]. Users should access GitService through \[Sandbox.Git\] rather than creating it directly.
### func \(\*GitService\) Add
```go
func (g *GitService) Add(ctx context.Context, path string, files []string) error
```
Add stages files for the next commit.
The path parameter specifies the repository directory. The files parameter is a list of file paths \(relative to the repository root\) to stage. Use "." to stage all changes.
Example:
```
// Stage specific files
err := sandbox.Git.Add(ctx, "/home/user/repo", []string{"file1.txt", "src/main.go"})
// Stage all changes
err := sandbox.Git.Add(ctx, "/home/user/repo", []string{"."})
```
Returns an error if the add operation fails.
### func \(\*GitService\) Branches
```go
func (g *GitService) Branches(ctx context.Context, path string) ([]string, error)
```
Branches lists all branches in a Git repository.
The path parameter specifies the repository directory.
Example:
```
branches, err := sandbox.Git.Branches(ctx, "/home/user/repo")
if err != nil {
return err
}
for _, branch := range branches {
fmt.Println(branch)
}
```
Returns a slice of branch names or an error if the operation fails.
### func \(\*GitService\) Checkout
```go
func (g *GitService) Checkout(ctx context.Context, path, name string) error
```
Checkout switches to a different branch or commit.
The path parameter specifies the repository directory. The name parameter specifies the branch name or commit SHA to checkout.
Example:
```
// Switch to a branch
err := sandbox.Git.Checkout(ctx, "/home/user/repo", "develop")
// Checkout a specific commit
err := sandbox.Git.Checkout(ctx, "/home/user/repo", "abc123def")
```
Returns an error if the checkout fails \(e.g., branch doesn't exist, uncommitted changes\).
### func \(\*GitService\) Clone
```go
func (g *GitService) Clone(ctx context.Context, url, path string, opts ...func(*options.GitClone)) error
```
Clone clones a Git repository into the specified path.
The url parameter specifies the repository URL \(HTTPS or SSH format\). The path parameter specifies the destination directory for the cloned repository.
Optional parameters can be configured using functional options:
- \[options.WithBranch\]: Clone a specific branch instead of the default
- \[options.WithCommitId\]: Checkout a specific commit after cloning
- \[options.WithUsername\]: Username for authentication \(HTTPS\)
- \[options.WithPassword\]: Password or token for authentication \(HTTPS\)
Example:
```
// Clone the default branch
err := sandbox.Git.Clone(ctx, "https://github.com/user/repo.git", "/home/user/repo")
// Clone a specific branch with authentication
err := sandbox.Git.Clone(ctx, "https://github.com/user/private-repo.git", "/home/user/repo",
options.WithBranch("develop"),
options.WithUsername("username"),
options.WithPassword("github_token"),
)
// Clone and checkout a specific commit
err := sandbox.Git.Clone(ctx, "https://github.com/user/repo.git", "/home/user/repo",
options.WithCommitId("abc123"),
)
```
Returns an error if the clone operation fails.
### func \(\*GitService\) Commit
```go
func (g *GitService) Commit(ctx context.Context, path, message, author, email string, opts ...func(*options.GitCommit)) (*types.GitCommitResponse, error)
```
Commit creates a new Git commit with the staged changes.
Parameters:
- path: The repository directory
- message: The commit message
- author: The author name for the commit
- email: The author email for the commit
Optional parameters can be configured using functional options:
- \[options.WithAllowEmpty\]: Allow creating commits with no changes
Example:
```
// Create a commit
resp, err := sandbox.Git.Commit(ctx, "/home/user/repo",
"Add new feature",
"John Doe",
"john@example.com",
)
if err != nil {
return err
}
fmt.Printf("Created commit: %s\n", resp.SHA)
// Create an empty commit
resp, err := sandbox.Git.Commit(ctx, "/home/user/repo",
"Empty commit for CI trigger",
"John Doe",
"john@example.com",
options.WithAllowEmpty(true),
)
```
Returns the \[types.GitCommitResponse\] containing the commit SHA, or an error if the commit fails.
### func \(\*GitService\) CreateBranch
```go
func (g *GitService) CreateBranch(ctx context.Context, path, name string) error
```
CreateBranch creates a new branch at the current HEAD.
The path parameter specifies the repository directory. The name parameter specifies the name for the new branch.
Note: This creates the branch but does not switch to it. Use \[GitService.Checkout\] to switch to the new branch after creation.
Example:
```
// Create a new branch
err := sandbox.Git.CreateBranch(ctx, "/home/user/repo", "feature/new-feature")
if err != nil {
return err
}
// Switch to the new branch
err = sandbox.Git.Checkout(ctx, "/home/user/repo", "feature/new-feature")
```
Returns an error if the branch creation fails \(e.g., branch already exists\).
### func \(\*GitService\) DeleteBranch
```go
func (g *GitService) DeleteBranch(ctx context.Context, path, name string, opts ...func(*options.GitDeleteBranch)) error
```
DeleteBranch deletes a branch from the repository.
The path parameter specifies the repository directory. The name parameter specifies the branch to delete.
Optional parameters can be configured using functional options:
- \[options.WithForce\]: Force delete the branch even if not fully merged
Note: You cannot delete the currently checked out branch.
Example:
```
// Delete a merged branch
err := sandbox.Git.DeleteBranch(ctx, "/home/user/repo", "feature/old-feature")
// Force delete an unmerged branch
err := sandbox.Git.DeleteBranch(ctx, "/home/user/repo", "feature/abandoned",
options.WithForce(true),
)
```
Returns an error if the deletion fails.
### func \(\*GitService\) Pull
```go
func (g *GitService) Pull(ctx context.Context, path string, opts ...func(*options.GitPull)) error
```
Pull fetches and merges changes from the remote repository.
The path parameter specifies the repository directory.
Optional parameters can be configured using functional options:
- \[options.WithPullUsername\]: Username for authentication
- \[options.WithPullPassword\]: Password or token for authentication
Example:
```
// Pull from a public repository
err := sandbox.Git.Pull(ctx, "/home/user/repo")
// Pull with authentication
err := sandbox.Git.Pull(ctx, "/home/user/repo",
options.WithPullUsername("username"),
options.WithPullPassword("github_token"),
)
```
Returns an error if the pull fails \(e.g., merge conflicts, authentication failure\).
### func \(\*GitService\) Push
```go
func (g *GitService) Push(ctx context.Context, path string, opts ...func(*options.GitPush)) error
```
Push pushes local commits to the remote repository.
The path parameter specifies the repository directory.
Optional parameters can be configured using functional options:
- \[options.WithPushUsername\]: Username for authentication
- \[options.WithPushPassword\]: Password or token for authentication
Example:
```
// Push to a public repository (no auth required)
err := sandbox.Git.Push(ctx, "/home/user/repo")
// Push with authentication
err := sandbox.Git.Push(ctx, "/home/user/repo",
options.WithPushUsername("username"),
options.WithPushPassword("github_token"),
)
```
Returns an error if the push fails \(e.g., authentication failure, remote rejection\).
### func \(\*GitService\) Status
```go
func (g *GitService) Status(ctx context.Context, path string) (*types.GitStatus, error)
```
Status returns the current Git status of a repository.
The path parameter specifies the repository directory to check.
The returned \[types.GitStatus\] contains:
- CurrentBranch: The name of the currently checked out branch
- Ahead: Number of commits ahead of the remote tracking branch
- Behind: Number of commits behind the remote tracking branch
- BranchPublished: Whether the branch has been pushed to remote
- FileStatus: List of files with their staging and working tree status
Example:
```
status, err := sandbox.Git.Status(ctx, "/home/user/repo")
if err != nil {
return err
}
fmt.Printf("On branch %s\n", status.CurrentBranch)
fmt.Printf("Ahead: %d, Behind: %d\n", status.Ahead, status.Behind)
for _, file := range status.FileStatus {
fmt.Printf("%s %s\n", file.Status, file.Path)
}
```
Returns an error if the status operation fails or the path is not a Git repository.
## type KeyboardService
KeyboardService provides keyboard input operations.
KeyboardService enables typing text, pressing keys, and executing keyboard shortcuts. Access through \[ComputerUseService.Keyboard\].
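Example \(the `keyboard` variable is assumed to be obtained through \[ComputerUseService.Keyboard\], as in the method examples below\):
```
// Type a filename, confirm with Enter, then save with Ctrl+S
err := keyboard.Type(ctx, "notes.txt", nil)
err = keyboard.Press(ctx, "Enter", nil)
err = keyboard.Hotkey(ctx, "ctrl+s")
```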
```go
type KeyboardService struct {
// contains filtered or unexported fields
}
```
### func NewKeyboardService
```go
func NewKeyboardService(toolboxClient *toolbox.APIClient, otel *otelState) *KeyboardService
```
NewKeyboardService creates a new KeyboardService.
### func \(\*KeyboardService\) Hotkey
```go
func (k *KeyboardService) Hotkey(ctx context.Context, keys string) error
```
Hotkey executes a keyboard shortcut.
Parameters:
- keys: The hotkey combination as a string \(e.g., "ctrl\+c", "alt\+tab"\)
Example:
```
// Copy (Ctrl+C)
err := keyboard.Hotkey(ctx, "ctrl+c")
// Paste (Ctrl+V)
err := keyboard.Hotkey(ctx, "ctrl+v")
// Switch windows (Alt+Tab)
err := keyboard.Hotkey(ctx, "alt+tab")
```
Returns an error if the hotkey fails.
### func \(\*KeyboardService\) Press
```go
func (k *KeyboardService) Press(ctx context.Context, key string, modifiers []string) error
```
Press simulates pressing a key with optional modifiers.
Parameters:
- key: The key to press \(e.g., "a", "Enter", "Tab", "F1"\)
- modifiers: Modifier keys to hold \(e.g., "ctrl", "alt", "shift", "meta"\)
Example:
```
// Press Enter
err := keyboard.Press(ctx, "Enter", nil)
// Press Ctrl+S
err := keyboard.Press(ctx, "s", []string{"ctrl"})
// Press Ctrl+Shift+N
err := keyboard.Press(ctx, "n", []string{"ctrl", "shift"})
```
Returns an error if the key press fails.
### func \(\*KeyboardService\) Type
```go
func (k *KeyboardService) Type(ctx context.Context, text string, delay *int) error
```
Type simulates typing the specified text.
Parameters:
- text: The text to type
- delay: Delay in milliseconds between keystrokes, nil for default
Example:
```
// Type text with default speed
err := keyboard.Type(ctx, "Hello, World!", nil)
// Type with custom delay between keystrokes
delay := 50
err := keyboard.Type(ctx, "Slow typing", &delay)
```
Returns an error if typing fails.
## type LspServerService
LspServerService provides Language Server Protocol \(LSP\) operations for a sandbox.
LspServerService enables IDE\-like features such as code completion, symbol search, and document analysis through LSP. The service manages a language server instance for a specific language and project path. Access through \[Sandbox.Lsp\].
Example:
```
// Get LSP service for Python
lsp := sandbox.Lsp(types.LspLanguageIDPython, "/home/user/project")
// Start the language server
if err := lsp.Start(ctx); err != nil {
return err
}
defer lsp.Stop(ctx)
// Open a file for analysis
if err := lsp.DidOpen(ctx, "/home/user/project/main.py"); err != nil {
return err
}
// Get code completions
completions, err := lsp.Completions(ctx, "/home/user/project/main.py",
types.Position{Line: 10, Character: 5})
```
```go
type LspServerService struct {
// contains filtered or unexported fields
}
```
### func NewLspServerService
```go
func NewLspServerService(toolboxClient *toolbox.APIClient, languageID types.LspLanguageID, projectPath string, otel *otelState) *LspServerService
```
NewLspServerService creates a new LspServerService.
This is typically called internally by the SDK through \[Sandbox.Lsp\]. Users should access LspServerService through \[Sandbox.Lsp\] rather than creating it directly.
Parameters:
- toolboxClient: The toolbox API client
- languageID: The language identifier \(e.g., \[types.LspLanguageIDPython\]\)
- projectPath: The root path of the project for LSP analysis
### func \(\*LspServerService\) Completions
```go
func (l *LspServerService) Completions(ctx context.Context, path string, position types.Position) (any, error)
```
Completions returns code completion suggestions at a position.
The file should be opened with \[LspServerService.DidOpen\] before requesting completions.
Parameters:
- path: Absolute path to the file
- position: Cursor position \(line and character, 0\-indexed\)
Example:
```
lsp.DidOpen(ctx, "/home/user/project/main.py")
completions, err := lsp.Completions(ctx, "/home/user/project/main.py",
types.Position{Line: 10, Character: 5})
if err != nil {
return err
}
fmt.Printf("Completions: %v\n", completions)
```
Returns completion items or an error.
### func \(\*LspServerService\) DidClose
```go
func (l *LspServerService) DidClose(ctx context.Context, path string) error
```
DidClose notifies the language server that a file was closed.
Call this when you're done working with a file to allow the server to release resources associated with it.
Parameters:
- path: Absolute path to the file
Example:
```
err := lsp.DidClose(ctx, "/home/user/project/main.py")
```
Returns an error if the notification fails.
### func \(\*LspServerService\) DidOpen
```go
func (l *LspServerService) DidOpen(ctx context.Context, path string) error
```
DidOpen notifies the language server that a file was opened.
This should be called before requesting completions or symbols for a file. The path is automatically converted to a file:// URI if needed.
Parameters:
- path: Absolute path to the file
Example:
```
err := lsp.DidOpen(ctx, "/home/user/project/main.py")
```
Returns an error if the notification fails.
### func \(\*LspServerService\) DocumentSymbols
```go
func (l *LspServerService) DocumentSymbols(ctx context.Context, path string) ([]any, error)
```
DocumentSymbols returns all symbols \(functions, classes, variables\) in a document.
Parameters:
- path: Absolute path to the file
Example:
```
symbols, err := lsp.DocumentSymbols(ctx, "/home/user/project/main.py")
if err != nil {
return err
}
for _, sym := range symbols {
fmt.Printf("Symbol: %v\n", sym)
}
```
Returns a slice of symbol information or an error.
### func \(\*LspServerService\) SandboxSymbols
```go
func (l *LspServerService) SandboxSymbols(ctx context.Context, query string) ([]any, error)
```
SandboxSymbols searches for symbols across the entire workspace.
Use this to find symbols \(functions, classes, etc.\) by name across all files in the project.
Parameters:
- query: Search query to match symbol names
Example:
```
symbols, err := lsp.SandboxSymbols(ctx, "MyClass")
if err != nil {
return err
}
for _, sym := range symbols {
fmt.Printf("Found: %v\n", sym)
}
```
Returns a slice of matching symbols or an error.
### func \(\*LspServerService\) Start
```go
func (l *LspServerService) Start(ctx context.Context) error
```
Start initializes and starts the language server.
The language server must be started before using other LSP operations. Call \[LspServerService.Stop\] when finished to release resources.
Example:
```
if err := lsp.Start(ctx); err != nil {
return err
}
defer lsp.Stop(ctx)
```
Returns an error if the server fails to start.
### func \(\*LspServerService\) Stop
```go
func (l *LspServerService) Stop(ctx context.Context) error
```
Stop shuts down the language server and releases resources.
Example:
```
err := lsp.Stop(ctx)
```
Returns an error if the server fails to stop gracefully.
## type MouseService
MouseService provides mouse control operations.
MouseService enables cursor movement, clicking, dragging, and scrolling. Access through \[ComputerUseService.Mouse\].
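Example \(the `mouse` variable is assumed to be obtained through \[ComputerUseService.Mouse\], as in the method examples below\):
```
// Move to a point, click it, then scroll down at the same position
pos, err := mouse.Move(ctx, 400, 300)
pos, err = mouse.Click(ctx, 400, 300, nil, nil)
success, err := mouse.Scroll(ctx, 400, 300, "down", nil)
```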
```go
type MouseService struct {
// contains filtered or unexported fields
}
```
### func NewMouseService
```go
func NewMouseService(toolboxClient *toolbox.APIClient, otel *otelState) *MouseService
```
NewMouseService creates a new MouseService.
### func \(\*MouseService\) Click
```go
func (m *MouseService) Click(ctx context.Context, x, y int, button *string, double *bool) (map[string]any, error)
```
Click performs a mouse click at the specified coordinates.
Parameters:
- x: X coordinate to click
- y: Y coordinate to click
- button: Mouse button \("left", "right", "middle"\), nil for left click
- double: Whether to double\-click, nil for single click
Example:
```
// Single left click
pos, err := mouse.Click(ctx, 100, 200, nil, nil)
// Right click
button := "right"
pos, err := mouse.Click(ctx, 100, 200, &button, nil)
// Double click
doubleClick := true
pos, err := mouse.Click(ctx, 100, 200, nil, &doubleClick)
```
Returns a map with the "x" and "y" coordinates of the click.
### func \(\*MouseService\) Drag
```go
func (m *MouseService) Drag(ctx context.Context, startX, startY, endX, endY int, button *string) (map[string]any, error)
```
Drag performs a mouse drag operation from start to end coordinates.
Parameters:
- startX, startY: Starting coordinates
- endX, endY: Ending coordinates
- button: Mouse button to use, nil for left button
Example:
```
// Drag from (100, 100) to (300, 300)
pos, err := mouse.Drag(ctx, 100, 100, 300, 300, nil)
```
Returns a map with the final "x" and "y" coordinates.
### func \(\*MouseService\) GetPosition
```go
func (m *MouseService) GetPosition(ctx context.Context) (map[string]any, error)
```
GetPosition returns the current cursor position.
Example:
```
pos, err := mouse.GetPosition(ctx)
if err != nil {
return err
}
fmt.Printf("Cursor at (%v, %v)\n", pos["x"], pos["y"])
```
Returns a map with "x" and "y" coordinates.
### func \(\*MouseService\) Move
```go
func (m *MouseService) Move(ctx context.Context, x, y int) (map[string]any, error)
```
Move moves the cursor to the specified coordinates.
Parameters:
- x: Target X coordinate
- y: Target Y coordinate
Example:
```
pos, err := mouse.Move(ctx, 500, 300)
```
Returns a map with the new "x" and "y" coordinates.
### func \(\*MouseService\) Scroll
```go
func (m *MouseService) Scroll(ctx context.Context, x, y int, direction string, amount *int) (bool, error)
```
Scroll performs a mouse scroll operation at the specified coordinates.
Parameters:
- x, y: Coordinates where the scroll occurs
- direction: Scroll direction \("up", "down", "left", "right"\)
- amount: Scroll amount, nil for default
Example:
```
// Scroll down at position (500, 400)
success, err := mouse.Scroll(ctx, 500, 400, "down", nil)
// Scroll up with specific amount
amount := 5
success, err := mouse.Scroll(ctx, 500, 400, "up", &amount)
```
Returns true if the scroll was successful.
## type OutputChannels
OutputChannels provides channels for streaming execution output.
All channels are closed when execution completes or encounters an error. The Done channel always receives exactly one message with the final result.
```go
type OutputChannels struct {
Stdout <-chan *types.OutputMessage // Receives stdout messages as they occur
Stderr <-chan *types.OutputMessage // Receives stderr messages as they occur
Errors <-chan *types.ExecutionError // Receives execution errors
Done <-chan *types.ExecutionResult // Receives final result when execution completes
}
```
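The channels above are consumed with a standard fan-in loop: drain the output channels until they close, then read the single final result from Done. A minimal self-contained sketch of that pattern, using plain local types in place of the SDK's `types.OutputMessage` and `types.ExecutionResult` (the producer goroutine here only simulates what the SDK does):

```go
package main

import "fmt"

// Stand-ins for the SDK's message and result types (assumptions for illustration).
type OutputMessage struct{ Text string }
type ExecutionResult struct{ ExitCode int }

func main() {
	stdout := make(chan *OutputMessage, 10)
	done := make(chan *ExecutionResult, 1)

	// Simulated producer: as documented above, all channels are closed when
	// execution ends, and Done receives exactly one final result.
	go func() {
		stdout <- &OutputMessage{Text: "Hello"}
		close(stdout)
		done <- &ExecutionResult{ExitCode: 0}
		close(done)
	}()

	// Drain stdout until it is closed, then read the single final result.
	for msg := range stdout {
		fmt.Println(msg.Text)
	}
	result := <-done
	fmt.Printf("exit code: %d\n", result.ExitCode)
}
```

Because Done is guaranteed to deliver exactly one message, the final `<-done` receive never blocks indefinitely once the output channels have closed.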
## type PaginatedSandboxes
PaginatedSandboxes represents a paginated list of sandboxes.
```go
type PaginatedSandboxes struct {
Items []*Sandbox // Sandboxes in this page
Total int // Total number of sandboxes
Page int // Current page number
TotalPages int // Total number of pages
}
```
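A typical use of this shape is walking every page until `Page` reaches `TotalPages`. A minimal self-contained sketch of that loop, using a hypothetical `fetchPage` function in place of the SDK's actual list call and a placeholder `Sandbox` type:

```go
package main

import "fmt"

// Placeholder item type for illustration.
type Sandbox struct{ ID string }

// Mirrors the paginated shape documented above.
type PaginatedSandboxes struct {
	Items      []*Sandbox
	Total      int
	Page       int
	TotalPages int
}

// fetchPage is a hypothetical stand-in for the SDK's list call.
func fetchPage(page int) *PaginatedSandboxes {
	all := [][]*Sandbox{
		{{ID: "sb-1"}, {ID: "sb-2"}},
		{{ID: "sb-3"}},
	}
	return &PaginatedSandboxes{
		Items:      all[page-1],
		Total:      3,
		Page:       page,
		TotalPages: len(all),
	}
}

func main() {
	// Walk every page, collecting all sandbox IDs.
	var ids []string
	for page := 1; ; page++ {
		resp := fetchPage(page)
		for _, sb := range resp.Items {
			ids = append(ids, sb.ID)
		}
		if resp.Page >= resp.TotalPages {
			break
		}
	}
	fmt.Println(ids) // [sb-1 sb-2 sb-3]
}
```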
## type ProcessService
ProcessService provides process execution operations for a sandbox.
ProcessService enables command execution, session management, and PTY \(pseudo\-terminal\) operations. It supports both synchronous command execution and interactive terminal sessions. Access through \[Sandbox.Process\].
Example:
```
// Execute a command
result, err := sandbox.Process.ExecuteCommand(ctx, "echo 'Hello, World!'")
fmt.Println(result.Result)
// Execute with options
result, err := sandbox.Process.ExecuteCommand(ctx, "ls -la",
options.WithCwd("/home/user/project"),
options.WithExecuteTimeout(30*time.Second),
)
// Create an interactive PTY session
handle, err := sandbox.Process.CreatePty(ctx, "my-terminal")
defer handle.Disconnect()
```
```go
type ProcessService struct {
// contains filtered or unexported fields
}
```
### func NewProcessService
```go
func NewProcessService(toolboxClient *toolbox.APIClient, otel *otelState, language types.CodeLanguage) *ProcessService
```
NewProcessService creates a new ProcessService with the provided toolbox client.
This is typically called internally by the SDK when creating a \[Sandbox\]. Users should access ProcessService through \[Sandbox.Process\] rather than creating it directly.
### func \(\*ProcessService\) CodeRun
```go
func (p *ProcessService) CodeRun(ctx context.Context, code string, opts ...func(*options.CodeRun)) (*types.ExecuteResponse, error)
```
CodeRun executes code in a language\-specific runtime and returns the result.
The code is executed directly by the daemon's code\-run endpoint using the specified language runtime \(Python, JavaScript, or TypeScript\). This is different from \[ProcessService.ExecuteCommand\], which runs shell commands.
Parameters:
- code: The source code to execute
The language runtime used is the sandbox's configured code language, set when the service is created \(see \[NewProcessService\]\).
Optional parameters can be configured using functional options:
- \[options.WithCodeRunParams\]: Set argv and environment variables
- \[options.WithCodeRunTimeout\]: Set execution timeout
Example:
```
// Run Python code (assumes the sandbox's code language is Python)
result, err := sandbox.Process.CodeRun(ctx, "print('Hello')")
fmt.Println(result.Result)
// Run with options
result, err := sandbox.Process.CodeRun(ctx, code,
options.WithCodeRunParams(types.CodeRunParams{
Argv: []string{"--verbose"},
Env: map[string]string{"DEBUG": "1"},
}),
options.WithCodeRunTimeout(30*time.Second),
)
```
Returns \[types.ExecuteResponse\] containing the output, exit code, and any artifacts \(such as charts\), or an error.
### func \(\*ProcessService\) ConnectPty
```go
func (p *ProcessService) ConnectPty(ctx context.Context, sessionID string) (*PtyHandle, error)
```
ConnectPty establishes a WebSocket connection to an existing PTY session.
Returns a \[PtyHandle\] for interacting with the terminal. The handle provides:
- DataChan\(\): Channel for receiving terminal output
- SendInput\(\): Method for sending keyboard input
- Resize\(\): Method for changing terminal size
- Disconnect\(\): Method for closing the connection
Parameters:
- sessionID: The PTY session to connect to
Example:
```
handle, err := sandbox.Process.ConnectPty(ctx, "my-terminal")
if err != nil {
return err
}
defer handle.Disconnect()
// Wait for connection
if err := handle.WaitForConnection(ctx); err != nil {
return err
}
// Read output
for data := range handle.DataChan() {
fmt.Print(string(data))
}
```
Returns a \[PtyHandle\] for terminal interaction, or an error.
### func \(\*ProcessService\) CreatePty
```go
func (p *ProcessService) CreatePty(ctx context.Context, id string, opts ...func(*options.CreatePty)) (*PtyHandle, error)
```
CreatePty creates a new PTY session and immediately connects to it.
This is a convenience method that combines \[ProcessService.CreatePtySession\] and \[ProcessService.ConnectPty\] into a single operation.
Parameters:
- id: Unique identifier for the PTY session
Optional parameters can be configured using functional options:
- \[options.WithCreatePtySize\]: Set terminal dimensions
- \[options.WithCreatePtyEnv\]: Set environment variables
Example:
```
handle, err := sandbox.Process.CreatePty(ctx, "interactive-shell",
options.WithCreatePtySize(types.PtySize{Rows: 24, Cols: 80}),
options.WithCreatePtyEnv(map[string]string{"TERM": "xterm-256color"}),
)
if err != nil {
return err
}
defer handle.Disconnect()
// Wait for connection
if err := handle.WaitForConnection(ctx); err != nil {
return err
}
// Send a command
handle.SendInput([]byte("ls -la\n"))
// Read output
for data := range handle.DataChan() {
fmt.Print(string(data))
}
```
Returns a \[PtyHandle\] for terminal interaction, or an error.
### func \(\*ProcessService\) CreatePtySession
```go
func (p *ProcessService) CreatePtySession(ctx context.Context, id string, opts ...func(*options.PtySession)) (*types.PtySessionInfo, error)
```
CreatePtySession creates a PTY \(pseudo\-terminal\) session.
A PTY session provides a terminal interface for interactive applications. Use \[ProcessService.ConnectPty\] to connect to the session after creation.
Parameters:
- id: Unique identifier for the session
Optional parameters can be configured using functional options:
- \[options.WithPtySize\]: Set terminal dimensions \(rows and columns\)
- \[options.WithPtyEnv\]: Set environment variables
Example:
```
// Create with default settings
session, err := sandbox.Process.CreatePtySession(ctx, "my-terminal")
// Create with custom size
session, err := sandbox.Process.CreatePtySession(ctx, "my-terminal",
options.WithPtySize(types.PtySize{Rows: 24, Cols: 80}),
)
```
Returns \[types.PtySessionInfo\] containing session details, or an error.
### func \(\*ProcessService\) CreateSession
```go
func (p *ProcessService) CreateSession(ctx context.Context, sessionID string) error
```
CreateSession creates a named session for executing multiple commands.
Sessions allow you to execute multiple commands while maintaining state \(like environment variables and working directory\) between commands.
Example:
```
// Create a session
err := sandbox.Process.CreateSession(ctx, "my-session")
if err != nil {
return err
}
defer sandbox.Process.DeleteSession(ctx, "my-session")
// Execute commands in the session
result, err := sandbox.Process.ExecuteSessionCommand(ctx, "my-session", "cd /home/user", false, false)
result, err = sandbox.Process.ExecuteSessionCommand(ctx, "my-session", "pwd", false, false)
```
Returns an error if session creation fails.
### func \(\*ProcessService\) DeleteSession
```go
func (p *ProcessService) DeleteSession(ctx context.Context, sessionID string) error
```
DeleteSession removes a session and releases its resources.
The sessionID parameter identifies the session to delete.
Example:
```
err := sandbox.Process.DeleteSession(ctx, "my-session")
```
Returns an error if the session doesn't exist or deletion fails.
### func \(\*ProcessService\) ExecuteCommand
```go
func (p *ProcessService) ExecuteCommand(ctx context.Context, command string, opts ...func(*options.ExecuteCommand)) (*types.ExecuteResponse, error)
```
ExecuteCommand executes a shell command and returns the result.
The command is executed in a shell context. For complex commands, consider using proper shell escaping or wrapping in a script.
Optional parameters can be configured using functional options:
- \[options.WithCwd\]: Set the working directory for command execution
- \[options.WithCommandEnv\]: Set environment variables
- \[options.WithExecuteTimeout\]: Set execution timeout
Example:
```
// Simple command
result, err := sandbox.Process.ExecuteCommand(ctx, "echo 'Hello'")
if err != nil {
return err
}
fmt.Println(result.Result)
// Command with options
result, err := sandbox.Process.ExecuteCommand(ctx, "npm install",
options.WithCwd("/home/user/project"),
options.WithExecuteTimeout(5*time.Minute),
)
// Check exit code
if result.ExitCode != 0 {
fmt.Printf("Command failed with exit code %d\n", result.ExitCode)
}
```
Returns \[types.ExecuteResponse\] containing the output and exit code, or an error.
### func \(\*ProcessService\) ExecuteSessionCommand
```go
func (p *ProcessService) ExecuteSessionCommand(ctx context.Context, sessionID, command string, runAsync bool, suppressInputEcho bool) (map[string]any, error)
```
ExecuteSessionCommand executes a command within a session.
Parameters:
- sessionID: The session to execute the command in
- command: The command to execute
- runAsync: If true, return immediately without waiting for completion
- suppressInputEcho: If true, suppress input echo
When runAsync is true, use \[ProcessService.GetSessionCommand\] to check status and \[ProcessService.GetSessionCommandLogs\] to retrieve output.
Example:
```
// Synchronous execution
result, err := sandbox.Process.ExecuteSessionCommand(ctx, "my-session", "ls -la", false, false)
if err != nil {
return err
}
fmt.Println(result["stdout"])
// Asynchronous execution
result, err := sandbox.Process.ExecuteSessionCommand(ctx, "my-session", "long-running-cmd", true, false)
cmdID := result["id"].(string)
// Later: check status with GetSessionCommand(ctx, "my-session", cmdID)
```
Returns a map with the command result, including id, exitCode \(if completed\), stdout, and stderr.
### func \(\*ProcessService\) GetEntrypointLogs
```go
func (p *ProcessService) GetEntrypointLogs(ctx context.Context) (*toolbox.SessionCommandLogsResponse, error)
```
GetEntrypointLogs retrieves the output logs of the sandbox entrypoint.
Example:
```
logs, err := sandbox.Process.GetEntrypointLogs(ctx)
if err != nil {
return err
}
fmt.Println(logs)
```
Returns the \[toolbox.SessionCommandLogsResponse\] containing the entrypoint command output logs, or an error.
### func \(\*ProcessService\) GetEntrypointLogsStream
```go
func (p *ProcessService) GetEntrypointLogsStream(ctx context.Context, stdout, stderr chan<- string) error
```
GetEntrypointLogsStream streams entrypoint logs as they become available.
This method establishes a WebSocket connection to stream sandbox entrypoint logs in real\-time. The stdout and stderr channels receive log chunks as strings and are closed when the stream ends or an error occurs.
Parameters:
- stdout: Channel to receive stdout output
- stderr: Channel to receive stderr output
The caller should provide buffered channels to avoid blocking.
Example:
```
stdout := make(chan string, 100)
stderr := make(chan string, 100)
go func() {
err := sandbox.Process.GetEntrypointLogsStream(ctx, stdout, stderr)
if err != nil {
log.Printf("Stream error: %v", err)
}
}()
for {
select {
case chunk, ok := <-stdout:
if !ok {
stdout = nil
} else {
fmt.Print(chunk)
}
case chunk, ok := <-stderr:
if !ok {
stderr = nil
} else {
fmt.Fprint(os.Stderr, chunk)
}
}
if stdout == nil && stderr == nil {
break
}
}
```
Returns an error if the connection fails or stream encounters an error.
### func \(\*ProcessService\) GetEntrypointSession
```go
func (p *ProcessService) GetEntrypointSession(ctx context.Context) (*toolbox.Session, error)
```
GetEntrypointSession retrieves information about the entrypoint session.
Returns the entrypoint session information, containing:
- SessionId: The entrypoint session identifier
- Commands: List of commands executed in the entrypoint session
Example:
```
info, err := sandbox.Process.GetEntrypointSession(ctx)
if err != nil {
return err
}
fmt.Printf("Session: %s\n", info.SessionId)
```
Returns an error if the session doesn't exist.
### func \(\*ProcessService\) GetPtySessionInfo
```go
func (p *ProcessService) GetPtySessionInfo(ctx context.Context, sessionID string) (*types.PtySessionInfo, error)
```
GetPtySessionInfo retrieves information about a PTY session.
Parameters:
- sessionID: The PTY session identifier
Example:
```
info, err := sandbox.Process.GetPtySessionInfo(ctx, "my-terminal")
if err != nil {
return err
}
fmt.Printf("Terminal size: %dx%d\n", info.Cols, info.Rows)
```
Returns \[types.PtySessionInfo\] with session details, or an error.
### func \(\*ProcessService\) GetSession
```go
func (p *ProcessService) GetSession(ctx context.Context, sessionID string) (map[string]any, error)
```
GetSession retrieves information about a session.
The sessionID parameter identifies the session to query.
Returns a map containing:
- sessionId: The session identifier
- commands: List of commands executed in the session
Example:
```
info, err := sandbox.Process.GetSession(ctx, "my-session")
if err != nil {
return err
}
fmt.Printf("Session: %s\n", info["sessionId"])
```
Returns an error if the session doesn't exist.
### func \(\*ProcessService\) GetSessionCommand
```go
func (p *ProcessService) GetSessionCommand(ctx context.Context, sessionID, commandID string) (map[string]any, error)
```
GetSessionCommand retrieves the status of a command in a session.
Parameters:
- sessionID: The session containing the command
- commandID: The command identifier \(from ExecuteSessionCommand result\)
Example:
```
status, err := sandbox.Process.GetSessionCommand(ctx, "my-session", cmdID)
if err != nil {
return err
}
if exitCode, ok := status["exitCode"]; ok {
	fmt.Printf("Command completed with exit code: %v\n", exitCode)
} else {
	fmt.Println("Command still running")
}
```
Returns command status including id, command text, and exitCode \(if completed\).
### func \(\*ProcessService\) GetSessionCommandLogs
```go
func (p *ProcessService) GetSessionCommandLogs(ctx context.Context, sessionID, commandID string) (*toolbox.SessionCommandLogsResponse, error)
```
GetSessionCommandLogs retrieves the output logs of a command.
Parameters:
- sessionID: The session containing the command
- commandID: The command identifier
Example:
```
logs, err := sandbox.Process.GetSessionCommandLogs(ctx, "my-session", cmdID)
if err != nil {
return err
}
fmt.Println(logs)
```
Returns a \[toolbox.SessionCommandLogsResponse\] containing the command output.
### func \(\*ProcessService\) GetSessionCommandLogsStream
```go
func (p *ProcessService) GetSessionCommandLogsStream(ctx context.Context, sessionID, commandID string, stdout, stderr chan<- string) error
```
GetSessionCommandLogsStream streams command logs as they become available.
This method establishes a WebSocket connection to stream logs in real\-time. The stdout and stderr channels receive log chunks as strings and are closed when the stream ends or an error occurs.
Parameters:
- sessionID: The session containing the command
- commandID: The command identifier
- stdout: Channel to receive stdout output
- stderr: Channel to receive stderr output
The caller should provide buffered channels to avoid blocking.
Example:
```
stdout := make(chan string, 100)
stderr := make(chan string, 100)
go func() {
	err := sandbox.Process.GetSessionCommandLogsStream(ctx, "session", "cmd", stdout, stderr)
	if err != nil {
		log.Printf("Stream error: %v", err)
	}
}()
for {
	select {
	case chunk, ok := <-stdout:
		if !ok {
			stdout = nil
		} else {
			fmt.Print(chunk)
		}
	case chunk, ok := <-stderr:
		if !ok {
			stderr = nil
		} else {
			fmt.Fprint(os.Stderr, chunk)
		}
	}
	if stdout == nil && stderr == nil {
		break
	}
}
```
Returns an error if the connection fails or the stream encounters an error.
### func \(\*ProcessService\) KillPtySession
```go
func (p *ProcessService) KillPtySession(ctx context.Context, sessionID string) error
```
KillPtySession terminates a PTY session.
This ends the terminal session and any processes running in it.
Parameters:
- sessionID: The PTY session to terminate
Example:
```
err := sandbox.Process.KillPtySession(ctx, "my-terminal")
```
Returns an error if the session doesn't exist or termination fails.
### func \(\*ProcessService\) ListPtySessions
```go
func (p *ProcessService) ListPtySessions(ctx context.Context) ([]*types.PtySessionInfo, error)
```
ListPtySessions returns all active PTY sessions.
Example:
```
sessions, err := sandbox.Process.ListPtySessions(ctx)
if err != nil {
return err
}
for _, session := range sessions {
	fmt.Printf("PTY: %s (%dx%d)\n", session.ID, session.Cols, session.Rows)
}
```
Returns a slice of \[types.PtySessionInfo\], or an error.
### func \(\*ProcessService\) ListSessions
```go
func (p *ProcessService) ListSessions(ctx context.Context) ([]map[string]any, error)
```
ListSessions returns all active sessions.
Example:
```
sessions, err := sandbox.Process.ListSessions(ctx)
if err != nil {
return err
}
for _, session := range sessions {
	fmt.Printf("Session: %s\n", session["sessionId"])
}
```
Returns a slice of session information maps, or an error.
### func \(\*ProcessService\) ResizePtySession
```go
func (p *ProcessService) ResizePtySession(ctx context.Context, sessionID string, ptySize types.PtySize) (*types.PtySessionInfo, error)
```
ResizePtySession changes the terminal dimensions of a PTY session.
This sends a SIGWINCH signal to applications, notifying them of the size change.
Parameters:
- sessionID: The PTY session to resize
- ptySize: New terminal dimensions
Example:
```
newSize := types.PtySize{Rows: 40, Cols: 120}
info, err := sandbox.Process.ResizePtySession(ctx, "my-terminal", newSize)
if err != nil {
return err
}
fmt.Printf("New size: %dx%d\n", info.Cols, info.Rows)
```
Returns updated \[types.PtySessionInfo\], or an error.
## type PtyHandle
PtyHandle manages a WebSocket connection to a PTY \(pseudo\-terminal\) session.
PtyHandle provides methods for sending input, receiving output via channels, resizing the terminal, and managing the connection lifecycle. It implements the [io.Reader](https://pkg.go.dev/io#Reader) and [io.Writer](https://pkg.go.dev/io#Writer) interfaces for integration with standard Go I/O.
Create a PtyHandle using \[ProcessService.CreatePty\].
Example:
```
// Create a PTY session
handle, err := sandbox.Process.CreatePty(ctx, "my-pty", nil)
if err != nil {
	return err
}
defer handle.Disconnect()

// Wait for connection to be established
if err := handle.WaitForConnection(ctx); err != nil {
	return err
}

// Send input
handle.SendInput([]byte("ls -la\n"))

// Read output from channel
for data := range handle.DataChan() {
	fmt.Print(string(data))
}

// Or use as io.Reader
io.Copy(os.Stdout, handle)
```
```go
type PtyHandle struct {
// contains filtered or unexported fields
}
```
### func \(\*PtyHandle\) DataChan
```go
func (h *PtyHandle) DataChan() <-chan []byte
```
DataChan returns a channel for receiving PTY output.
The channel receives raw bytes from the terminal. It is closed when the PTY session ends or the connection is closed.
Example:
```
for data := range handle.DataChan() {
	fmt.Print(string(data))
}
```
### func \(\*PtyHandle\) Disconnect
```go
func (h *PtyHandle) Disconnect() error
```
Disconnect closes the WebSocket connection and releases resources.
Call this when done with the PTY session. This does not terminate the underlying process \- use \[PtyHandle.Kill\] for that.
Example:
```
defer handle.Disconnect()
```
Returns an error if the WebSocket close fails.
### func \(\*PtyHandle\) Error
```go
func (h *PtyHandle) Error() *string
```
Error returns the error message if the PTY session failed, or nil otherwise.
### func \(\*PtyHandle\) ExitCode
```go
func (h *PtyHandle) ExitCode() *int
```
ExitCode returns the exit code of the PTY process, or nil if still running.
### func \(\*PtyHandle\) IsConnected
```go
func (h *PtyHandle) IsConnected() bool
```
IsConnected returns true if the WebSocket connection is active.
### func \(\*PtyHandle\) Kill
```go
func (h *PtyHandle) Kill(ctx context.Context) error
```
Kill terminates the PTY session and its associated process.
This operation is irreversible. The process receives a SIGKILL signal and terminates immediately.
Example:
```
err := handle.Kill(ctx)
```
Returns an error if the kill operation fails.
### func \(\*PtyHandle\) Read
```go
func (h *PtyHandle) Read(p []byte) (n int, err error)
```
Read implements [io.Reader](https://pkg.go.dev/io#Reader) for reading PTY output.
This method blocks until data is available or the PTY closes \(returns [io.EOF](https://pkg.go.dev/io#EOF)\). Use with [io.Copy](https://pkg.go.dev/io#Copy), [bufio.Scanner](https://pkg.go.dev/bufio#Scanner), or any standard Go I/O utilities.
Example:
```
// Copy all output to stdout
io.Copy(os.Stdout, handle)

// Use with bufio.Scanner
scanner := bufio.NewScanner(handle)
for scanner.Scan() {
	fmt.Println(scanner.Text())
}
```
### func \(\*PtyHandle\) Resize
```go
func (h *PtyHandle) Resize(ctx context.Context, cols, rows int) (*types.PtySessionInfo, error)
```
Resize changes the PTY terminal dimensions.
This notifies terminal applications about the new dimensions via SIGWINCH signal. Call this when the terminal display size changes.
Parameters:
- cols: Number of columns \(width in characters\)
- rows: Number of rows \(height in characters\)
Example:
```
info, err := handle.Resize(ctx, 120, 40)
```
Returns updated \[types.PtySessionInfo\] or an error.
### func \(\*PtyHandle\) SendInput
```go
func (h *PtyHandle) SendInput(data []byte) error
```
SendInput sends input data to the PTY session.
The data is sent as raw bytes and will be processed as if typed in the terminal. Use this to send commands, keystrokes, or any terminal input.
Example:
```
// Send a command
handle.SendInput([]byte("ls -la\n"))
// Send Ctrl+C
handle.SendInput([]byte{0x03})
```
Returns an error if the PTY is not connected or sending fails.
### func \(\*PtyHandle\) SessionID
```go
func (h *PtyHandle) SessionID() string
```
SessionID returns the unique identifier for this PTY session.
### func \(\*PtyHandle\) Wait
```go
func (h *PtyHandle) Wait(ctx context.Context) (*types.PtyResult, error)
```
Wait blocks until the PTY process exits and returns the result.
Example:
```
result, err := handle.Wait(ctx)
if err != nil {
return err
}
if result.ExitCode != nil {
fmt.Printf("Process exited with code: %d\n", *result.ExitCode)
}
```
Returns \[types.PtyResult\] with exit code and any error, or an error if the context is cancelled.
### func \(\*PtyHandle\) WaitForConnection
```go
func (h *PtyHandle) WaitForConnection(ctx context.Context) error
```
WaitForConnection waits for the WebSocket connection to be established.
This method blocks until the PTY session is ready to receive input and send output, or until a timeout \(10 seconds\) expires. Always call this after creating a PTY to ensure the connection is ready.
Example:
```
handle, _ := sandbox.Process.CreatePty(ctx, "my-pty", nil)
if err := handle.WaitForConnection(ctx); err != nil {
return fmt.Errorf("PTY connection failed: %w", err)
}
```
Returns an error if the connection times out or fails.
### func \(\*PtyHandle\) Write
```go
func (h *PtyHandle) Write(p []byte) (n int, err error)
```
Write implements [io.Writer](https://pkg.go.dev/io#Writer) for sending input to the PTY.
Example:
```
// Write directly
handle.Write([]byte("echo hello\n"))
// Use with io.Copy
io.Copy(handle, strings.NewReader("echo hello\n"))
```
## type PushAccessCredentials
PushAccessCredentials holds temporary credentials for uploading to object storage.
These credentials are obtained from the API and used for uploading build contexts when creating snapshots with custom \[DockerImage\] definitions.
```go
type PushAccessCredentials struct {
	StorageURL     string `json:"storageUrl"`
	AccessKey      string `json:"accessKey"`
	Secret         string `json:"secret"`
	SessionToken   string `json:"sessionToken"`
	Bucket         string `json:"bucket"`
	OrganizationID string `json:"organizationId"`
}
```
## type RecordingService
RecordingService provides screen recording operations.
RecordingService enables starting, stopping, and managing screen recordings. Access through \[ComputerUseService.Recording\].
```go
type RecordingService struct {
// contains filtered or unexported fields
}
```
### func NewRecordingService
```go
func NewRecordingService(toolboxClient *toolbox.APIClient) *RecordingService
```
NewRecordingService creates a new RecordingService.
### func \(\*RecordingService\) Delete
```go
func (r *RecordingService) Delete(ctx context.Context, id string) error
```
Delete deletes a recording by ID.
Parameters:
- id: The ID of the recording to delete
Example:
```
err := cu.Recording().Delete(ctx, recordingID)
if err != nil {
return err
}
fmt.Println("Recording deleted")
```
Returns an error if the deletion fails.
### func \(\*RecordingService\) Download
```go
func (r *RecordingService) Download(ctx context.Context, id string, localPath string) error
```
Download downloads a recording file and saves it to a local path.
The file is streamed directly to disk without loading the entire content into memory.
Parameters:
- id: The ID of the recording to download
- localPath: Path to save the recording file locally
Example:
```
err := cu.Recording().Download(ctx, recordingID, "local_recording.mp4")
if err != nil {
return err
}
fmt.Println("Recording downloaded")
```
Returns an error if the download fails.
### func \(\*RecordingService\) Get
```go
func (r *RecordingService) Get(ctx context.Context, id string) (*toolbox.Recording, error)
```
Get gets details of a specific recording by ID.
Parameters:
- id: The ID of the recording to retrieve
Example:
```
recording, err := cu.Recording().Get(ctx, recordingID)
if err != nil {
return err
}
fmt.Printf("Recording: %s\n", recording.GetFileName())
fmt.Printf("Status: %s\n", recording.GetStatus())
fmt.Printf("Duration: %v seconds\n", recording.GetDurationSeconds())
```
Returns \[toolbox.Recording\] with recording details.
### func \(\*RecordingService\) List
```go
func (r *RecordingService) List(ctx context.Context) (*toolbox.ListRecordingsResponse, error)
```
List lists all recordings \(active and completed\).
Example:
```
recordings, err := cu.Recording().List(ctx)
if err != nil {
return err
}
fmt.Printf("Found %d recordings\n", len(recordings.GetRecordings()))
for _, rec := range recordings.GetRecordings() {
	fmt.Printf("- %s: %s\n", rec.GetFileName(), rec.GetStatus())
}
```
Returns \[toolbox.ListRecordingsResponse\] with all recordings.
### func \(\*RecordingService\) Start
```go
func (r *RecordingService) Start(ctx context.Context, label *string) (*toolbox.Recording, error)
```
Start starts a new screen recording session.
Parameters:
- label: Optional custom label for the recording
Example:
```
// Start a recording with a label
recording, err := cu.Recording().Start(ctx, stringPtr("my-test-recording"))
if err != nil {
return err
}
fmt.Printf("Recording started: %s\n", recording.GetId())
fmt.Printf("File: %s\n", recording.GetFilePath())
```
Returns \[toolbox.Recording\] with recording details.
### func \(\*RecordingService\) Stop
```go
func (r *RecordingService) Stop(ctx context.Context, id string) (*toolbox.Recording, error)
```
Stop stops an active screen recording session.
Parameters:
- id: The ID of the recording to stop
Example:
```
result, err := cu.Recording().Stop(ctx, recordingID)
if err != nil {
return err
}
fmt.Printf("Recording stopped: %v seconds\n", result.GetDurationSeconds())
fmt.Printf("Saved to: %s\n", result.GetFilePath())
```
Returns \[toolbox.Recording\] with recording details.
## type Sandbox
Sandbox represents a Daytona sandbox environment.
A Sandbox provides an isolated development environment with file system, git, process execution, code interpretation, and desktop automation capabilities. Sandboxes can be started, stopped, archived, and deleted.
Access sandbox capabilities through the service fields:
- FileSystem: File and directory operations
- Git: Git repository operations
- Process: Command execution and PTY sessions
- CodeInterpreter: Python code execution
- ComputerUse: Desktop automation \(mouse, keyboard, screenshots\)
Example:
```
// Create and use a sandbox
sandbox, err := client.Create(ctx)
if err != nil {
return err
}
defer sandbox.Delete(ctx)
// Execute a command
result, err := sandbox.Process.ExecuteCommand(ctx, "echo 'Hello'")
// Work with files
err = sandbox.FileSystem.UploadFile(ctx, "local.txt", "/home/user/remote.txt")
```
```go
type Sandbox struct {
	ID            string                 // Unique sandbox identifier
	Name          string                 // Human-readable sandbox name
	State         apiclient.SandboxState // Current sandbox state
	Target        string                 // Target region/environment where the sandbox runs
	ToolboxClient *toolbox.APIClient     // Internal API client

	// AutoArchiveInterval is the time in minutes after stopping before auto-archiving.
	// Set to 0 to disable auto-archiving.
	AutoArchiveInterval int

	// AutoDeleteInterval is the time in minutes after stopping before auto-deletion.
	// Set to -1 to disable auto-deletion.
	// Set to 0 to delete immediately upon stopping.
	AutoDeleteInterval int

	// NetworkBlockAll blocks all network access when true.
	NetworkBlockAll bool

	// NetworkAllowList is a comma-separated list of allowed CIDR addresses.
	NetworkAllowList *string

	FileSystem      *FileSystemService      // File system operations
	Git             *GitService             // Git operations
	Process         *ProcessService         // Process and PTY operations
	CodeInterpreter *CodeInterpreterService // Python code execution
	ComputerUse     *ComputerUseService     // Desktop automation
	// contains filtered or unexported fields
}
```
### func NewSandbox
```go
func NewSandbox(client *Client, toolboxClient *toolbox.APIClient, id string, name string, state apiclient.SandboxState, target string, autoArchiveInterval int, autoDeleteInterval int, networkBlockAll bool, networkAllowList *string, language types.CodeLanguage) *Sandbox
```
NewSandbox creates a new Sandbox instance.
This is typically called internally by the SDK. Users should create sandboxes using \[Client.Create\] rather than calling this directly.
### func \(\*Sandbox\) Archive
```go
func (s *Sandbox) Archive(ctx context.Context) error
```
Archive archives the sandbox, preserving its state in cost\-effective storage.
When sandboxes are archived, the entire filesystem state is moved to object storage, making it possible to keep sandboxes available for extended periods at reduced cost. Use \[Sandbox.Start\] to unarchive and resume.
Example:
```
err := sandbox.Archive(ctx)
if err != nil {
return err
}
// Sandbox is now archived and can be restored later
```
### func \(\*Sandbox\) Delete
```go
func (s *Sandbox) Delete(ctx context.Context) error
```
Delete deletes the sandbox with a default timeout of 60 seconds.
This operation is irreversible. All data in the sandbox will be lost. For custom timeout, use \[Sandbox.DeleteWithTimeout\].
Example:
```
err := sandbox.Delete(ctx)
```
### func \(\*Sandbox\) DeleteWithTimeout
```go
func (s *Sandbox) DeleteWithTimeout(ctx context.Context, timeout time.Duration) error
```
DeleteWithTimeout deletes the sandbox with a custom timeout. 0 means no timeout.
Example:
```
err := sandbox.DeleteWithTimeout(ctx, 2*time.Minute)
```
### func \(\*Sandbox\) ExperimentalCreateSnapshot
```go
func (s *Sandbox) ExperimentalCreateSnapshot(ctx context.Context, name string) error
```
ExperimentalCreateSnapshot creates a snapshot from the current state of the sandbox with a default timeout of 60 seconds.
This captures the sandbox's filesystem into a reusable snapshot that can be used to create new sandboxes. The sandbox will temporarily enter a 'snapshotting' state and return to its previous state when complete.
Example:
```
err := sandbox.ExperimentalCreateSnapshot(ctx, "my-snapshot")
if err != nil {
return err
}
```
### func \(\*Sandbox\) ExperimentalCreateSnapshotWithTimeout
```go
func (s *Sandbox) ExperimentalCreateSnapshotWithTimeout(ctx context.Context, name string, timeout time.Duration) error
```
ExperimentalCreateSnapshotWithTimeout creates a snapshot from the current state of the sandbox with a custom timeout. 0 means no timeout.
Example:
```
err := sandbox.ExperimentalCreateSnapshotWithTimeout(ctx, "my-snapshot", 2*time.Minute)
```
### func \(\*Sandbox\) ExperimentalFork
```go
func (s *Sandbox) ExperimentalFork(ctx context.Context, name *string) (*Sandbox, error)
```
ExperimentalFork forks the sandbox with a default timeout of 60 seconds, creating a new sandbox with an identical filesystem.
The forked sandbox is a copy\-on\-write clone of the original. It starts with the same disk contents but operates independently from that point on. ExperimentalFork waits for the new sandbox to reach the "started" state before returning.
Example:
```
forked, err := sandbox.ExperimentalFork(ctx, nil)
if err != nil {
return err
}
fmt.Printf("Forked sandbox: %s\n", forked.ID)
```
### func \(\*Sandbox\) ExperimentalForkWithTimeout
```go
func (s *Sandbox) ExperimentalForkWithTimeout(ctx context.Context, name *string, timeout time.Duration) (*Sandbox, error)
```
ExperimentalForkWithTimeout forks the sandbox with a custom timeout, creating a new sandbox with an identical filesystem.
The forked sandbox is a copy\-on\-write clone of the original. It starts with the same disk contents but operates independently from that point on. ExperimentalForkWithTimeout waits for the new sandbox to reach the "started" state before returning. 0 means no timeout.
Example:
```
forked, err := sandbox.ExperimentalForkWithTimeout(ctx, nil, 2*time.Minute)
if err != nil {
return err
}
fmt.Printf("Forked sandbox: %s\n", forked.ID)
```
### func \(\*Sandbox\) ExpireSignedPreviewLink
```go
func (s *Sandbox) ExpireSignedPreviewLink(ctx context.Context, port int, token string) error
```
ExpireSignedPreviewLink expires a previously generated signed preview link.
This invalidates the signed preview link token, preventing any further access.
Example:
```
err := sandbox.ExpireSignedPreviewLink(ctx, 3000, "preview-token-to-expire")
if err != nil {
return err
}
```
### func \(\*Sandbox\) GetPreviewLink
```go
func (s *Sandbox) GetPreviewLink(ctx context.Context, port int) (*types.PreviewLink, error)
```
GetPreviewLink returns a preview link for accessing a port on the sandbox.
The returned PreviewLink contains both the URL and an authentication token. For private sandboxes, the token must be sent via the "x\-daytona\-preview\-token" request header.
Example:
```
preview, err := sandbox.GetPreviewLink(ctx, 3000)
if err != nil {
return err
}
fmt.Printf("URL: %s\nToken: %s\n", preview.URL, preview.Token)
```
### func \(\*Sandbox\) GetSignedPreviewLink
```go
func (s *Sandbox) GetSignedPreviewLink(ctx context.Context, port int, expiresInSeconds int) (*types.SignedPreviewLink, error)
```
GetSignedPreviewLink retrieves a signed preview URL for the sandbox at the specified port, valid for up to expiresInSeconds seconds.
Example:
```
preview, err := sandbox.GetSignedPreviewLink(ctx, 3000, 3600)
if err != nil {
return err
}
fmt.Printf("Sandbox ID: %s\nPort: %d\nURL: %s\nToken: %s\n", preview.SandboxID, preview.Port, preview.URL, preview.Token)
```
### func \(\*Sandbox\) GetUserHomeDir
```go
func (s *Sandbox) GetUserHomeDir(ctx context.Context) (string, error)
```
GetUserHomeDir returns the user's home directory path in the sandbox.
Example:
```
homeDir, err := sandbox.GetUserHomeDir(ctx)
if err != nil {
return err
}
fmt.Printf("Home directory: %s\n", homeDir) // e.g., "/home/daytona"
```
### func \(\*Sandbox\) GetWorkingDir
```go
func (s *Sandbox) GetWorkingDir(ctx context.Context) (string, error)
```
GetWorkingDir returns the current working directory in the sandbox.
Example:
```
workDir, err := sandbox.GetWorkingDir(ctx)
if err != nil {
return err
}
fmt.Printf("Working directory: %s\n", workDir)
```
### func \(\*Sandbox\) RefreshData
```go
func (s *Sandbox) RefreshData(ctx context.Context) error
```
RefreshData refreshes the sandbox data from the API.
This updates the sandbox's State and other properties from the server. Useful for checking if the sandbox state has changed.
Example:
```
err := sandbox.RefreshData(ctx)
if err != nil {
return err
}
fmt.Printf("Current state: %s\n", sandbox.State)
```
### func \(\*Sandbox\) Resize
```go
func (s *Sandbox) Resize(ctx context.Context, resources *types.Resources) error
```
Resize resizes the sandbox resources with a default timeout of 60 seconds.
Changes the CPU, memory, or disk allocation for the sandbox. Resizing a started sandbox allows increasing CPU and memory. To resize disk or decrease resources, the sandbox must be stopped first.
Example:
```
// Resize a started sandbox (CPU and memory can be increased)
err := sandbox.Resize(ctx, &types.Resources{CPU: 4, Memory: 8})
// Resize a stopped sandbox (CPU, memory, and disk can be changed)
sandbox.Stop(ctx)
err = sandbox.Resize(ctx, &types.Resources{CPU: 2, Memory: 4, Disk: 30})
```
### func \(\*Sandbox\) ResizeWithTimeout
```go
func (s *Sandbox) ResizeWithTimeout(ctx context.Context, resources *types.Resources, timeout time.Duration) error
```
ResizeWithTimeout resizes the sandbox resources with a custom timeout.
Changes the CPU, memory, or disk allocation for the sandbox. Resizing a started sandbox allows increasing CPU and memory. To resize disk or decrease resources, the sandbox must be stopped first. 0 means no timeout.
Example:
```
err := sandbox.ResizeWithTimeout(ctx, &types.Resources{CPU: 4, Memory: 8}, 2*time.Minute)
```
### func \(\*Sandbox\) SetAutoArchiveInterval
```go
func (s *Sandbox) SetAutoArchiveInterval(ctx context.Context, intervalMinutes *int) error
```
SetAutoArchiveInterval sets the auto\-archive interval in minutes.
The sandbox will be automatically archived after being stopped for this many minutes. Set to 0 to disable auto\-archiving \(sandbox will never auto\-archive\).
Example:
```
// Archive after 30 minutes of being stopped
interval := 30
err := sandbox.SetAutoArchiveInterval(ctx, &interval)
// Disable auto-archiving
interval = 0
err = sandbox.SetAutoArchiveInterval(ctx, &interval)
```
### func \(\*Sandbox\) SetAutoDeleteInterval
```go
func (s *Sandbox) SetAutoDeleteInterval(ctx context.Context, intervalMinutes *int) error
```
SetAutoDeleteInterval sets the auto\-delete interval in minutes.
The sandbox will be automatically deleted after being stopped for this many minutes.
Special values:
- \-1: Disable auto\-deletion \(sandbox will never auto\-delete\)
- 0: Delete immediately upon stopping
Example:
```
// Delete after 60 minutes of being stopped
interval := 60
err := sandbox.SetAutoDeleteInterval(ctx, &interval)
// Delete immediately when stopped
interval = 0
err = sandbox.SetAutoDeleteInterval(ctx, &interval)
// Never auto-delete
interval = -1
err = sandbox.SetAutoDeleteInterval(ctx, &interval)
```
### func \(\*Sandbox\) SetLabels
```go
func (s *Sandbox) SetLabels(ctx context.Context, labels map[string]string) error
```
SetLabels sets custom labels on the sandbox.
Labels are key\-value pairs that can be used for organization and filtering. This replaces all existing labels.
Example:
```
err := sandbox.SetLabels(ctx, map[string]string{
	"environment": "development",
	"team":        "backend",
	"project":     "api-server",
})
```
### func \(\*Sandbox\) Start
```go
func (s *Sandbox) Start(ctx context.Context) error
```
Start starts the sandbox with a default timeout of 60 seconds.
If the sandbox is already running, this is a no\-op. For custom timeout, use \[Sandbox.StartWithTimeout\].
Example:
```
err := sandbox.Start(ctx)
if err != nil {
return err
}
// Sandbox is now running
```
### func \(\*Sandbox\) StartWithTimeout
```go
func (s *Sandbox) StartWithTimeout(ctx context.Context, timeout time.Duration) error
```
StartWithTimeout starts the sandbox with a custom timeout.
The method blocks until the sandbox reaches the "started" state or the timeout is exceeded. 0 means no timeout.
Example:
```
err := sandbox.StartWithTimeout(ctx, 2*time.Minute)
if err != nil {
return err
}
```
### func \(\*Sandbox\) Stop
```go
func (s *Sandbox) Stop(ctx context.Context) error
```
Stop stops the sandbox with a default timeout of 60 seconds.
Stopping a sandbox preserves its state. Use \[Sandbox.Start\] to resume. For custom timeout or force stop, use \[Sandbox.StopWithTimeout\].
Example:
```
err := sandbox.Stop(ctx)
```
### func \(\*Sandbox\) StopWithTimeout
```go
func (s *Sandbox) StopWithTimeout(ctx context.Context, timeout time.Duration, force bool) error
```
StopWithTimeout stops the sandbox with a custom timeout.
The method blocks until the sandbox reaches the "stopped" state or the timeout is exceeded. 0 means no timeout. Set force to true to use SIGKILL instead of SIGTERM.
Example:
```
err := sandbox.StopWithTimeout(ctx, 2*time.Minute, false)
```
### func \(\*Sandbox\) WaitForResize
```go
func (s *Sandbox) WaitForResize(ctx context.Context, timeout time.Duration) error
```
WaitForResize waits for the sandbox resize operation to complete.
This method polls the sandbox state until it's no longer resizing, encounters an error state, or the timeout is exceeded. 0 means no timeout.
Example:
```
err := sandbox.WaitForResize(ctx, 2*time.Minute)
```
### func \(\*Sandbox\) WaitForStart
```go
func (s *Sandbox) WaitForStart(ctx context.Context, timeout time.Duration) error
```
WaitForStart waits for the sandbox to reach the "started" state.
This method polls the sandbox state until it's started, encounters an error state, or the timeout is exceeded. 0 means no timeout.
Example:
```
err := sandbox.WaitForStart(ctx, 2*time.Minute)
if err != nil {
return err
}
// Sandbox is now running
```
### func \(\*Sandbox\) WaitForStop
```go
func (s *Sandbox) WaitForStop(ctx context.Context, timeout time.Duration) error
```
WaitForStop waits for the sandbox to reach the "stopped" state.
This method polls the sandbox state until it's stopped or the timeout is exceeded. 0 means no timeout.
Example:
```
err := sandbox.WaitForStop(ctx, 2*time.Minute)
```
## type ScreenshotService
ScreenshotService provides screen capture operations.
ScreenshotService enables capturing full screen or region screenshots. Access through \[ComputerUseService.Screenshot\].
```go
type ScreenshotService struct {
// contains filtered or unexported fields
}
```
### func NewScreenshotService
```go
func NewScreenshotService(toolboxClient *toolbox.APIClient, otel *otelState) *ScreenshotService
```
NewScreenshotService creates a new ScreenshotService.
### func \(\*ScreenshotService\) TakeFullScreen
```go
func (s *ScreenshotService) TakeFullScreen(ctx context.Context, showCursor *bool) (*types.ScreenshotResponse, error)
```
TakeFullScreen captures a screenshot of the entire screen.
Parameters:
- showCursor: Whether to include the cursor in the screenshot, nil for default
Example:
```
// Capture full screen
screenshot, err := ss.TakeFullScreen(ctx, nil)
if err != nil {
return err
}
// screenshot.Image contains the base64-encoded image data
// Capture with cursor visible
showCursor := true
screenshot, err = ss.TakeFullScreen(ctx, &showCursor)
```
Returns \[types.ScreenshotResponse\] with the captured image.
### func \(\*ScreenshotService\) TakeRegion
```go
func (s *ScreenshotService) TakeRegion(ctx context.Context, region types.ScreenshotRegion, showCursor *bool) (*types.ScreenshotResponse, error)
```
TakeRegion captures a screenshot of a specific screen region.
Parameters:
- region: The region to capture \(X, Y, Width, Height\)
- showCursor: Whether to include the cursor in the screenshot, nil for default
Example:
```
// Capture a 200x100 region starting at (50, 50)
region := types.ScreenshotRegion{X: 50, Y: 50, Width: 200, Height: 100}
screenshot, err := ss.TakeRegion(ctx, region, nil)
if err != nil {
return err
}
```
Returns \[types.ScreenshotResponse\] with the captured image.
## type SnapshotService
SnapshotService provides snapshot \(image template\) management operations.
SnapshotService enables creating, managing, and deleting snapshots that serve as templates for sandboxes. Snapshots can be built from Docker images or custom [DockerImage](<#DockerImage>) definitions with build contexts. Access through \[Client.Snapshots\].
Example:
```
// Create a snapshot from an existing image
snapshot, logChan, err := client.Snapshots.Create(ctx, &types.CreateSnapshotParams{
Name: "my-python-env",
Image: "python:3.11-slim",
})
if err != nil {
return err
}
// Stream build logs
for log := range logChan {
fmt.Println(log)
}
// Create a snapshot from a custom Image definition
image := daytona.Base("python:3.11-slim").
PipInstall([]string{"numpy", "pandas"}).
Workdir("/app")
snapshot, logChan, err = client.Snapshots.Create(ctx, &types.CreateSnapshotParams{
Name: "custom-python-env",
Image: image,
})
```
```go
type SnapshotService struct {
// contains filtered or unexported fields
}
```
### func NewSnapshotService
```go
func NewSnapshotService(client *Client) *SnapshotService
```
NewSnapshotService creates a new SnapshotService.
This is typically called internally by the SDK when creating a [Client](<#Client>). Users should access SnapshotService through \[Client.Snapshots\] rather than creating it directly.
### func \(\*SnapshotService\) Create
```go
func (s *SnapshotService) Create(ctx context.Context, params *types.CreateSnapshotParams) (*types.Snapshot, <-chan string, error)
```
Create builds a new snapshot from an image and streams build logs.
The image parameter can be either a Docker image reference string \(e.g., "python:3.11"\) or a [DockerImage](<#DockerImage>) builder object for custom Dockerfile definitions.
Parameters:
- params: Snapshot creation parameters including name, image, resources, and entrypoint
Example:
```
// Create from Docker Hub image
snapshot, logChan, err := client.Snapshots.Create(ctx, &types.CreateSnapshotParams{
Name: "my-env",
Image: "python:3.11-slim",
})
if err != nil {
return err
}
// Stream build logs
for log := range logChan {
fmt.Println(log)
}
// Create with custom image and resources
image := daytona.Base("python:3.11").PipInstall([]string{"numpy"})
snapshot, logChan, err = client.Snapshots.Create(ctx, &types.CreateSnapshotParams{
Name: "custom-env",
Image: image,
Resources: &types.Resources{CPU: 2, Memory: 4096},
})
```
Returns the created \[types.Snapshot\], a channel for streaming build logs, or an error. The log channel is closed when the build completes or fails.
### func \(\*SnapshotService\) Delete
```go
func (s *SnapshotService) Delete(ctx context.Context, snapshot *types.Snapshot) error
```
Delete permanently removes a snapshot.
Sandboxes created from this snapshot will continue to work, but no new sandboxes can be created from it after deletion.
Parameters:
- snapshot: The snapshot to delete
Example:
```
err := client.Snapshots.Delete(ctx, snapshot)
if err != nil {
return err
}
```
Returns an error if deletion fails.
### func \(\*SnapshotService\) Get
```go
func (s *SnapshotService) Get(ctx context.Context, nameOrID string) (*types.Snapshot, error)
```
Get retrieves a snapshot by name or ID.
Parameters:
- nameOrID: The snapshot name or unique ID
Example:
```
snapshot, err := client.Snapshots.Get(ctx, "my-python-env")
if err != nil {
return err
}
fmt.Printf("Snapshot %s: %s\n", snapshot.Name, snapshot.State)
```
Returns the \[types.Snapshot\] or an error if not found.
### func \(\*SnapshotService\) List
```go
func (s *SnapshotService) List(ctx context.Context, page *int, limit *int) (*types.PaginatedSnapshots, error)
```
List returns snapshots with optional pagination.
Parameters:
- page: Page number \(1\-indexed\), nil for first page
- limit: Maximum snapshots per page, nil for default
Example:
```
// List first page with default limit
result, err := client.Snapshots.List(ctx, nil, nil)
if err != nil {
return err
}
// List with pagination
page, limit := 2, 10
result, err = client.Snapshots.List(ctx, &page, &limit)
fmt.Printf("Page %d of %d, total: %d\n", result.Page, result.TotalPages, result.Total)
```
Returns \[types.PaginatedSnapshots\] containing the snapshots and pagination info.
## type VolumeService
VolumeService provides persistent storage volume management operations.
VolumeService enables creating, managing, and deleting persistent storage volumes that can be attached to sandboxes. Volumes persist data independently of sandbox lifecycle and can be shared between sandboxes. Access through \[Client.Volumes\].
Example:
```
// Create a new volume
volume, err := client.Volumes.Create(ctx, "my-data-volume")
if err != nil {
return err
}
// Wait for volume to be ready
volume, err = client.Volumes.WaitForReady(ctx, volume, 60*time.Second)
if err != nil {
return err
}
// List all volumes
volumes, err := client.Volumes.List(ctx)
```
```go
type VolumeService struct {
// contains filtered or unexported fields
}
```
### func NewVolumeService
```go
func NewVolumeService(client *Client) *VolumeService
```
NewVolumeService creates a new VolumeService.
This is typically called internally by the SDK when creating a [Client](<#Client>). Users should access VolumeService through \[Client.Volumes\] rather than creating it directly.
### func \(\*VolumeService\) Create
```go
func (v *VolumeService) Create(ctx context.Context, name string) (*types.Volume, error)
```
Create creates a new persistent storage volume.
The volume starts in the "pending" state and transitions to "ready" when available. Use [VolumeService.WaitForReady](<#VolumeService.WaitForReady>) to wait for the volume to become ready.
Parameters:
- name: Unique name for the volume
Example:
```
volume, err := client.Volumes.Create(ctx, "my-data-volume")
if err != nil {
return err
}
// Wait for volume to be ready
volume, err = client.Volumes.WaitForReady(ctx, volume, 60*time.Second)
```
Returns the created \[types.Volume\] or an error.
### func \(\*VolumeService\) Delete
```go
func (v *VolumeService) Delete(ctx context.Context, volume *types.Volume) error
```
Delete permanently removes a volume and all its data.
This operation is irreversible. Ensure no sandboxes are using the volume before deletion.
Parameters:
- volume: The volume to delete
Example:
```
err := client.Volumes.Delete(ctx, volume)
if err != nil {
return err
}
```
Returns an error if deletion fails.
### func \(\*VolumeService\) Get
```go
func (v *VolumeService) Get(ctx context.Context, name string) (*types.Volume, error)
```
Get retrieves a volume by its name.
Parameters:
- name: The volume name
Example:
```
volume, err := client.Volumes.Get(ctx, "my-data-volume")
if err != nil {
return err
}
fmt.Printf("Volume state: %s\n", volume.State)
```
Returns the \[types.Volume\] or an error if not found.
### func \(\*VolumeService\) List
```go
func (v *VolumeService) List(ctx context.Context) ([]*types.Volume, error)
```
List returns all volumes in the organization.
Example:
```
volumes, err := client.Volumes.List(ctx)
if err != nil {
return err
}
for _, vol := range volumes {
fmt.Printf("Volume %s: %s\n", vol.Name, vol.State)
}
```
Returns a slice of \[types.Volume\] or an error if the request fails.
### func \(\*VolumeService\) WaitForReady
```go
func (v *VolumeService) WaitForReady(ctx context.Context, volume *types.Volume, timeout time.Duration) (*types.Volume, error)
```
WaitForReady waits for a volume to reach the "ready" state.
This method polls the volume status until it becomes ready, reaches an error state, or the timeout expires. The polling interval is 1 second.
Parameters:
- volume: The volume to wait for
- timeout: Maximum time to wait for the volume to become ready
Example:
```
volume, err := client.Volumes.Create(ctx, "my-volume")
if err != nil {
return err
}
// Wait up to 2 minutes for the volume to be ready
volume, err = client.Volumes.WaitForReady(ctx, volume, 2*time.Minute)
if err != nil {
return fmt.Errorf("volume failed to become ready: %w", err)
}
```
Returns the updated \[types.Volume\] when ready, or an error if the timeout expires or the volume enters an error state.
{/* Code generated by gomarkdoc. DO NOT EDIT */}
# errors
```go
import "github.com/daytonaio/daytona/libs/sdk-go/pkg/errors"
```
## Index
- [func ConvertAPIError\(err error, httpResp \*http.Response\) error](<#ConvertAPIError>)
- [func ConvertToolboxError\(err error, httpResp \*http.Response\) error](<#ConvertToolboxError>)
- [type DaytonaError](<#DaytonaError>)
- [func NewDaytonaError\(message string, statusCode int, headers http.Header\) \*DaytonaError](<#NewDaytonaError>)
- [func \(e \*DaytonaError\) Error\(\) string](<#DaytonaError.Error>)
- [type DaytonaNotFoundError](<#DaytonaNotFoundError>)
- [func NewDaytonaNotFoundError\(message string, headers http.Header\) \*DaytonaNotFoundError](<#NewDaytonaNotFoundError>)
- [func \(e \*DaytonaNotFoundError\) Error\(\) string](<#DaytonaNotFoundError.Error>)
- [type DaytonaRateLimitError](<#DaytonaRateLimitError>)
- [func NewDaytonaRateLimitError\(message string, headers http.Header\) \*DaytonaRateLimitError](<#NewDaytonaRateLimitError>)
- [func \(e \*DaytonaRateLimitError\) Error\(\) string](<#DaytonaRateLimitError.Error>)
- [type DaytonaTimeoutError](<#DaytonaTimeoutError>)
- [func NewDaytonaTimeoutError\(message string\) \*DaytonaTimeoutError](<#NewDaytonaTimeoutError>)
- [func \(e \*DaytonaTimeoutError\) Error\(\) string](<#DaytonaTimeoutError.Error>)
## func ConvertAPIError
```go
func ConvertAPIError(err error, httpResp *http.Response) error
```
ConvertAPIError converts api\-client\-go errors to SDK error types
## func ConvertToolboxError
```go
func ConvertToolboxError(err error, httpResp *http.Response) error
```
ConvertToolboxError converts toolbox\-api\-client\-go errors to SDK error types
## type DaytonaError
DaytonaError is the base error type for all Daytona SDK errors
```go
type DaytonaError struct {
Message string
StatusCode int
Headers http.Header
}
```
### func NewDaytonaError
```go
func NewDaytonaError(message string, statusCode int, headers http.Header) *DaytonaError
```
NewDaytonaError creates a new DaytonaError
### func \(\*DaytonaError\) Error
```go
func (e *DaytonaError) Error() string
```
## type DaytonaNotFoundError
DaytonaNotFoundError represents a resource not found error \(404\)
```go
type DaytonaNotFoundError struct {
*DaytonaError
}
```
### func NewDaytonaNotFoundError
```go
func NewDaytonaNotFoundError(message string, headers http.Header) *DaytonaNotFoundError
```
NewDaytonaNotFoundError creates a new DaytonaNotFoundError
### func \(\*DaytonaNotFoundError\) Error
```go
func (e *DaytonaNotFoundError) Error() string
```
## type DaytonaRateLimitError
DaytonaRateLimitError represents a rate limit error \(429\)
```go
type DaytonaRateLimitError struct {
*DaytonaError
}
```
### func NewDaytonaRateLimitError
```go
func NewDaytonaRateLimitError(message string, headers http.Header) *DaytonaRateLimitError
```
NewDaytonaRateLimitError creates a new DaytonaRateLimitError
### func \(\*DaytonaRateLimitError\) Error
```go
func (e *DaytonaRateLimitError) Error() string
```
## type DaytonaTimeoutError
DaytonaTimeoutError represents a timeout error
```go
type DaytonaTimeoutError struct {
*DaytonaError
}
```
### func NewDaytonaTimeoutError
```go
func NewDaytonaTimeoutError(message string) *DaytonaTimeoutError
```
NewDaytonaTimeoutError creates a new DaytonaTimeoutError
### func \(\*DaytonaTimeoutError\) Error
```go
func (e *DaytonaTimeoutError) Error() string
```
The Daytona Go SDK provides a powerful interface for programmatically interacting with Daytona Sandboxes.
## Installation
Install the Daytona Go SDK using go get:
```bash
go get github.com/daytonaio/daytona/libs/sdk-go
```
## Getting Started
### Create a Sandbox
Create a Daytona Sandbox to run your code securely in an isolated environment. The following snippet is an example "Hello World" program that runs securely inside a Daytona Sandbox.
```go
package main
import (
"context"
"fmt"
"log"
"github.com/daytonaio/daytona/libs/sdk-go/pkg/daytona"
)
func main() {
// Initialize the SDK (uses environment variables by default)
client, err := daytona.NewClient()
if err != nil {
log.Fatal(err)
}
// Create a new sandbox
sandbox, err := client.Create(context.Background(), nil)
if err != nil {
log.Fatal(err)
}
// Execute a command
response, err := sandbox.Process.ExecuteCommand(context.Background(), "echo 'Hello, World!'")
if err != nil {
log.Fatal(err)
}
fmt.Println(response.Result)
}
```
## Configuration
The Daytona SDK can be configured using environment variables or by passing options to the constructor:
```go
package main
import (
"github.com/daytonaio/daytona/libs/sdk-go/pkg/daytona"
"github.com/daytonaio/daytona/libs/sdk-go/pkg/types"
)
func main() {
// Using environment variables (DAYTONA_API_KEY, DAYTONA_API_URL, DAYTONA_TARGET)
client, _ := daytona.NewClient()
// Using explicit configuration
config := &types.DaytonaConfig{
APIKey: "YOUR_API_KEY",
APIUrl: "https://app.daytona.io/api",
Target: "us",
}
client, _ = daytona.NewClientWithConfig(config)
}
```
For more information on configuring the Daytona SDK, see [configuration](https://www.daytona.io/docs/en/configuration.md).
{/* Code generated by gomarkdoc. DO NOT EDIT */}
# options
```go
import "github.com/daytonaio/daytona/libs/sdk-go/pkg/options"
```
Package options provides functional option types for configuring SDK operations.
This package uses the functional options pattern to provide a clean, extensible API for configuring optional parameters. Each option function returns a closure that modifies the corresponding options struct.
### Usage
Options are passed as variadic arguments to SDK methods:
```
err := sandbox.Git.Clone(ctx, url, path,
options.WithBranch("develop"),
options.WithUsername("user"),
options.WithPassword("token"),
)
```
### Generic Apply Function
The [Apply](<#Apply>) function creates a new options struct and applies all provided option functions to it:
```
opts := options.Apply(
options.WithBranch("main"),
options.WithUsername("user"),
)
// opts.Branch == "main", opts.Username == "user"
```
## Index
- [func Apply\[T any\]\(opts ...func\(\*T\)\) \*T](<#Apply>)
- [func WithAllowEmpty\(allowEmpty bool\) func\(\*GitCommit\)](<#WithAllowEmpty>)
- [func WithBranch\(branch string\) func\(\*GitClone\)](<#WithBranch>)
- [func WithCodeRunLanguage\(language types.CodeLanguage\) func\(\*CodeRun\)](<#WithCodeRunLanguage>)
- [func WithCodeRunParams\(params types.CodeRunParams\) func\(\*CodeRun\)](<#WithCodeRunParams>)
- [func WithCodeRunTimeout\(timeout time.Duration\) func\(\*CodeRun\)](<#WithCodeRunTimeout>)
- [func WithCommandEnv\(env map\[string\]string\) func\(\*ExecuteCommand\)](<#WithCommandEnv>)
- [func WithCommitId\(commitID string\) func\(\*GitClone\)](<#WithCommitId>)
- [func WithCreatePtyEnv\(env map\[string\]string\) func\(\*CreatePty\)](<#WithCreatePtyEnv>)
- [func WithCreatePtySize\(ptySize types.PtySize\) func\(\*CreatePty\)](<#WithCreatePtySize>)
- [func WithCustomContext\(contextID string\) func\(\*RunCode\)](<#WithCustomContext>)
- [func WithCwd\(cwd string\) func\(\*ExecuteCommand\)](<#WithCwd>)
- [func WithEnv\(env map\[string\]string\) func\(\*RunCode\)](<#WithEnv>)
- [func WithExecuteTimeout\(timeout time.Duration\) func\(\*ExecuteCommand\)](<#WithExecuteTimeout>)
- [func WithExtraIndexURLs\(urls ...string\) func\(\*PipInstall\)](<#WithExtraIndexURLs>)
- [func WithExtraOptions\(options string\) func\(\*PipInstall\)](<#WithExtraOptions>)
- [func WithFindLinks\(links ...string\) func\(\*PipInstall\)](<#WithFindLinks>)
- [func WithForce\(force bool\) func\(\*GitDeleteBranch\)](<#WithForce>)
- [func WithGroup\(group string\) func\(\*SetFilePermissions\)](<#WithGroup>)
- [func WithIndexURL\(url string\) func\(\*PipInstall\)](<#WithIndexURL>)
- [func WithInterpreterTimeout\(timeout time.Duration\) func\(\*RunCode\)](<#WithInterpreterTimeout>)
- [func WithLogChannel\(logChannel chan string\) func\(\*CreateSandbox\)](<#WithLogChannel>)
- [func WithMode\(mode string\) func\(\*CreateFolder\)](<#WithMode>)
- [func WithOwner\(owner string\) func\(\*SetFilePermissions\)](<#WithOwner>)
- [func WithPassword\(password string\) func\(\*GitClone\)](<#WithPassword>)
- [func WithPermissionMode\(mode string\) func\(\*SetFilePermissions\)](<#WithPermissionMode>)
- [func WithPre\(\) func\(\*PipInstall\)](<#WithPre>)
- [func WithPtyEnv\(env map\[string\]string\) func\(\*PtySession\)](<#WithPtyEnv>)
- [func WithPtySize\(size types.PtySize\) func\(\*PtySession\)](<#WithPtySize>)
- [func WithPullPassword\(password string\) func\(\*GitPull\)](<#WithPullPassword>)
- [func WithPullUsername\(username string\) func\(\*GitPull\)](<#WithPullUsername>)
- [func WithPushPassword\(password string\) func\(\*GitPush\)](<#WithPushPassword>)
- [func WithPushUsername\(username string\) func\(\*GitPush\)](<#WithPushUsername>)
- [func WithTimeout\(timeout time.Duration\) func\(\*CreateSandbox\)](<#WithTimeout>)
- [func WithUsername\(username string\) func\(\*GitClone\)](<#WithUsername>)
- [func WithWaitForStart\(waitForStart bool\) func\(\*CreateSandbox\)](<#WithWaitForStart>)
- [type CodeRun](<#CodeRun>)
- [type CreateFolder](<#CreateFolder>)
- [type CreatePty](<#CreatePty>)
- [type CreateSandbox](<#CreateSandbox>)
- [type ExecuteCommand](<#ExecuteCommand>)
- [type GitClone](<#GitClone>)
- [type GitCommit](<#GitCommit>)
- [type GitDeleteBranch](<#GitDeleteBranch>)
- [type GitPull](<#GitPull>)
- [type GitPush](<#GitPush>)
- [type PipInstall](<#PipInstall>)
- [type PtySession](<#PtySession>)
- [type RunCode](<#RunCode>)
- [type SetFilePermissions](<#SetFilePermissions>)
## func Apply
```go
func Apply[T any](opts ...func(*T)) *T
```
Apply creates a new instance of type T and applies all provided option functions.
This generic function enables a consistent pattern for applying functional options across different option types. It allocates a zero\-value instance of T, then applies each option function in order.
Example:
```
opts := options.Apply(
options.WithBranch("main"),
options.WithUsername("user"),
)
```
## func WithAllowEmpty
```go
func WithAllowEmpty(allowEmpty bool) func(*GitCommit)
```
WithAllowEmpty allows creating a commit even when there are no staged changes.
This is useful for triggering CI/CD pipelines or marking points in history without actual code changes.
Example:
```
resp, err := sandbox.Git.Commit(ctx, path, "Trigger rebuild", author, email,
options.WithAllowEmpty(true),
)
```
## func WithBranch
```go
func WithBranch(branch string) func(*GitClone)
```
WithBranch sets the branch to clone instead of the repository's default branch.
Example:
```
err := sandbox.Git.Clone(ctx, url, path, options.WithBranch("develop"))
```
## func WithCodeRunLanguage
```go
func WithCodeRunLanguage(language types.CodeLanguage) func(*CodeRun)
```
WithCodeRunLanguage sets the programming language for code execution, overriding the default language.
## func WithCodeRunParams
```go
func WithCodeRunParams(params types.CodeRunParams) func(*CodeRun)
```
WithCodeRunParams sets the code execution parameters.
Example:
```
result, err := sandbox.Process.CodeRun(ctx, code,
options.WithCodeRunParams(types.CodeRunParams{Language: "python"}),
)
```
## func WithCodeRunTimeout
```go
func WithCodeRunTimeout(timeout time.Duration) func(*CodeRun)
```
WithCodeRunTimeout sets the execution timeout for the code run. If the code doesn't complete within the timeout, it is terminated.
## func WithCommandEnv
```go
func WithCommandEnv(env map[string]string) func(*ExecuteCommand)
```
WithCommandEnv sets environment variables for the command.
These variables are added to the command's environment in addition to the sandbox's default environment.
Example:
```
result, err := sandbox.Process.ExecuteCommand(ctx, "echo $MY_VAR",
options.WithCommandEnv(map[string]string{"MY_VAR": "hello"}),
)
```
## func WithCommitId
```go
func WithCommitId(commitID string) func(*GitClone)
```
WithCommitId sets a specific commit SHA to checkout after cloning.
The repository is first cloned, then the specified commit is checked out, resulting in a detached HEAD state.
Example:
```
err := sandbox.Git.Clone(ctx, url, path, options.WithCommitId("abc123def"))
```
## func WithCreatePtyEnv
```go
func WithCreatePtyEnv(env map[string]string) func(*CreatePty)
```
WithCreatePtyEnv sets environment variables for CreatePty.
Example:
```
handle, err := sandbox.Process.CreatePty(ctx, "my-pty",
options.WithCreatePtyEnv(map[string]string{"TERM": "xterm-256color"}),
)
```
## func WithCreatePtySize
```go
func WithCreatePtySize(ptySize types.PtySize) func(*CreatePty)
```
WithCreatePtySize sets the PTY terminal dimensions for CreatePty.
Example:
```
handle, err := sandbox.Process.CreatePty(ctx, "my-pty",
options.WithCreatePtySize(types.PtySize{Rows: 24, Cols: 80}),
)
```
## func WithCustomContext
```go
func WithCustomContext(contextID string) func(*RunCode)
```
WithCustomContext sets the interpreter context ID for code execution.
Using a context allows you to maintain state \(variables, imports, etc.\) across multiple code executions. Create a context with CreateContext first.
Example:
```
codeCtx, _ := sandbox.CodeInterpreter.CreateContext(ctx, nil)
channels, err := sandbox.CodeInterpreter.RunCode(ctx, "x = 42",
options.WithCustomContext(codeCtx["id"].(string)),
)
)
```
## func WithCwd
```go
func WithCwd(cwd string) func(*ExecuteCommand)
```
WithCwd sets the working directory for command execution.
Example:
```
result, err := sandbox.Process.ExecuteCommand(ctx, "ls -la",
options.WithCwd("/home/user/project"),
)
```
## func WithEnv
```go
func WithEnv(env map[string]string) func(*RunCode)
```
WithEnv sets environment variables for code execution.
These variables are available to the code during execution.
Example:
```
channels, err := sandbox.CodeInterpreter.RunCode(ctx, "import os; print(os.environ['API_KEY'])",
options.WithEnv(map[string]string{"API_KEY": "secret"}),
)
```
## func WithExecuteTimeout
```go
func WithExecuteTimeout(timeout time.Duration) func(*ExecuteCommand)
```
WithExecuteTimeout sets the timeout for command execution.
If the command doesn't complete within the timeout, it will be terminated.
Example:
```
result, err := sandbox.Process.ExecuteCommand(ctx, "sleep 60",
options.WithExecuteTimeout(5*time.Second),
)
```
## func WithExtraIndexURLs
```go
func WithExtraIndexURLs(urls ...string) func(*PipInstall)
```
WithExtraIndexURLs adds extra index URLs for pip install.
Extra indexes are checked in addition to the main index URL. Useful for installing packages from both PyPI and a private index.
Example:
```
image := daytona.Base("python:3.11").PipInstall(
[]string{"mypackage"},
options.WithExtraIndexURLs("https://private.example.com/simple/"),
)
```
## func WithExtraOptions
```go
func WithExtraOptions(options string) func(*PipInstall)
```
WithExtraOptions adds extra command\-line options for pip install.
Use this for pip options not covered by other With\* functions.
Example:
```
image := daytona.Base("python:3.11").PipInstall(
[]string{"mypackage"},
options.WithExtraOptions("--no-cache-dir --upgrade"),
)
```
## func WithFindLinks
```go
func WithFindLinks(links ...string) func(*PipInstall)
```
WithFindLinks adds find\-links URLs for pip install.
Find\-links URLs are searched for packages before the package index. Useful for installing packages from local directories or custom URLs.
Example:
```
image := daytona.Base("python:3.11").PipInstall(
[]string{"mypackage"},
options.WithFindLinks("/path/to/wheels", "https://example.com/wheels/"),
)
```
## func WithForce
```go
func WithForce(force bool) func(*GitDeleteBranch)
```
WithForce enables force deletion of a branch even if it's not fully merged.
Use with caution as this can result in lost commits if the branch contains work that hasn't been merged elsewhere.
Example:
```
err := sandbox.Git.DeleteBranch(ctx, path, "feature/abandoned",
options.WithForce(true),
)
```
## func WithGroup
```go
func WithGroup(group string) func(*SetFilePermissions)
```
WithGroup sets the file group.
The group should be a valid group name on the sandbox system.
Example:
```
err := sandbox.FileSystem.SetFilePermissions(ctx, "/home/user/file.txt",
options.WithGroup("users"),
)
```
## func WithIndexURL
```go
func WithIndexURL(url string) func(*PipInstall)
```
WithIndexURL sets the base URL of the Python Package Index.
Replaces the default PyPI \(https://pypi.org/simple\) with a custom index.
Example:
```
image := daytona.Base("python:3.11").PipInstall(
[]string{"mypackage"},
options.WithIndexURL("https://my-pypi.example.com/simple/"),
)
```
## func WithInterpreterTimeout
```go
func WithInterpreterTimeout(timeout time.Duration) func(*RunCode)
```
WithInterpreterTimeout sets the execution timeout for code.
If the code doesn't complete within the timeout, execution is terminated.
Example:
```
channels, err := sandbox.CodeInterpreter.RunCode(ctx, "import time; time.sleep(60)",
options.WithInterpreterTimeout(5*time.Second),
)
```
## func WithLogChannel
```go
func WithLogChannel(logChannel chan string) func(*CreateSandbox)
```
WithLogChannel provides a channel for receiving build logs during sandbox creation.
When creating a sandbox from a custom image that requires building, build logs are streamed to the provided channel. The channel is closed when streaming completes. If no build is required, no logs are sent and the channel remains unused.
Example:
```
logChan := make(chan string)
go func() {
for log := range logChan {
fmt.Println(log)
}
}()
sandbox, err := client.Create(ctx, params,
options.WithLogChannel(logChan),
)
```
## func WithMode
```go
func WithMode(mode string) func(*CreateFolder)
```
WithMode sets the Unix file permissions for the created folder.
The mode should be specified as an octal string \(e.g., "0755", "0700"\). If not specified, defaults to "0755".
Example:
```
err := sandbox.FileSystem.CreateFolder(ctx, "/home/user/mydir",
options.WithMode("0700"),
)
```
## func WithOwner
```go
func WithOwner(owner string) func(*SetFilePermissions)
```
WithOwner sets the file owner.
The owner should be a valid username on the sandbox system.
Example:
```
err := sandbox.FileSystem.SetFilePermissions(ctx, "/home/user/file.txt",
options.WithOwner("root"),
)
```
## func WithPassword
```go
func WithPassword(password string) func(*GitClone)
```
WithPassword sets the password or access token for HTTPS authentication when cloning.
For GitHub, use a Personal Access Token \(PAT\). For GitLab, use a Project Access Token or Personal Access Token. For Bitbucket, use an App Password.
Example:
```
err := sandbox.Git.Clone(ctx, url, path,
options.WithUsername("username"),
options.WithPassword("ghp_xxxxxxxxxxxx"),
)
```
## func WithPermissionMode
```go
func WithPermissionMode(mode string) func(*SetFilePermissions)
```
WithPermissionMode sets the Unix file permissions.
The mode should be specified as an octal string \(e.g., "0644", "0755"\).
Example:
```
err := sandbox.FileSystem.SetFilePermissions(ctx, "/home/user/file.txt",
options.WithPermissionMode("0644"),
)
```
## func WithPre
```go
func WithPre() func(*PipInstall)
```
WithPre enables installation of pre\-release and development versions.
Example:
```
image := daytona.Base("python:3.11").PipInstall(
[]string{"mypackage"},
options.WithPre(),
)
```
## func WithPtyEnv
```go
func WithPtyEnv(env map[string]string) func(*PtySession)
```
WithPtyEnv sets environment variables for the PTY session.
Example:
```
session, err := sandbox.Process.CreatePtySession(ctx, "my-session",
options.WithPtyEnv(map[string]string{"TERM": "xterm-256color"}),
)
```
## func WithPtySize
```go
func WithPtySize(size types.PtySize) func(*PtySession)
```
WithPtySize sets the PTY terminal dimensions.
Example:
```
session, err := sandbox.Process.CreatePtySession(ctx, "my-session",
options.WithPtySize(types.PtySize{Rows: 24, Cols: 80}),
)
```
## func WithPullPassword
```go
func WithPullPassword(password string) func(*GitPull)
```
WithPullPassword sets the password or access token for HTTPS authentication when pulling.
Example:
```
err := sandbox.Git.Pull(ctx, path,
options.WithPullUsername("username"),
options.WithPullPassword("ghp_xxxxxxxxxxxx"),
)
```
## func WithPullUsername
```go
func WithPullUsername(username string) func(*GitPull)
```
WithPullUsername sets the username for HTTPS authentication when pulling.
Example:
```
err := sandbox.Git.Pull(ctx, path,
options.WithPullUsername("username"),
options.WithPullPassword("github_token"),
)
```
## func WithPushPassword
```go
func WithPushPassword(password string) func(*GitPush)
```
WithPushPassword sets the password or access token for HTTPS authentication when pushing.
Example:
```
err := sandbox.Git.Push(ctx, path,
options.WithPushUsername("username"),
options.WithPushPassword("ghp_xxxxxxxxxxxx"),
)
```
## func WithPushUsername
```go
func WithPushUsername(username string) func(*GitPush)
```
WithPushUsername sets the username for HTTPS authentication when pushing.
Example:
```
err := sandbox.Git.Push(ctx, path,
options.WithPushUsername("username"),
options.WithPushPassword("github_token"),
)
```
## func WithTimeout
```go
func WithTimeout(timeout time.Duration) func(*CreateSandbox)
```
WithTimeout sets the maximum duration to wait for sandbox creation to complete.
If the timeout is exceeded before the sandbox is ready, Create returns an error. The default timeout is 60 seconds.
Example:
```
sandbox, err := client.Create(ctx, params,
options.WithTimeout(5*time.Minute),
)
```
## func WithUsername
```go
func WithUsername(username string) func(*GitClone)
```
WithUsername sets the username for HTTPS authentication when cloning.
For GitHub, GitLab, and similar services, the username is typically your account username or a placeholder like "git" when using tokens.
Example:
```
err := sandbox.Git.Clone(ctx, url, path,
options.WithUsername("username"),
options.WithPassword("github_token"),
)
```
## func WithWaitForStart
```go
func WithWaitForStart(waitForStart bool) func(*CreateSandbox)
```
WithWaitForStart controls whether \[daytona.Client.Create\] waits for the sandbox to reach the started state before returning.
When true \(the default\), Create blocks until the sandbox is fully started and ready for use. When false, Create returns immediately after the sandbox is created, which may be in a pending or building state.
Example:
```
// Return immediately without waiting for the sandbox to start
sandbox, err := client.Create(ctx, params,
options.WithWaitForStart(false),
)
```
## type CodeRun
CodeRun holds optional parameters for \[daytona.ProcessService.CodeRun\].
```go
type CodeRun struct {
Params *types.CodeRunParams // Code execution parameters
Timeout *time.Duration // Execution timeout
Language types.CodeLanguage // Override the default language
}
```
## type CreateFolder
CreateFolder holds optional parameters for \[daytona.FileSystemService.CreateFolder\].
```go
type CreateFolder struct {
Mode *string // Unix file permissions (e.g., "0755")
}
```
## type CreatePty
CreatePty holds optional parameters for \[daytona.ProcessService.CreatePty\].
```go
type CreatePty struct {
PtySize *types.PtySize // Terminal dimensions (rows and columns)
Env map[string]string // Environment variables for the PTY session
}
```
## type CreateSandbox
CreateSandbox holds optional parameters for \[daytona.Client.Create\].
```go
type CreateSandbox struct {
Timeout *time.Duration // Maximum time to wait for sandbox creation
WaitForStart bool // Whether to wait for the sandbox to reach started state
LogChannel chan string // Channel for receiving build logs during image builds
}
```
## type ExecuteCommand
ExecuteCommand holds optional parameters for \[daytona.ProcessService.ExecuteCommand\].
```go
type ExecuteCommand struct {
Cwd *string // Working directory for command execution
Env map[string]string // Environment variables
Timeout *time.Duration // Command execution timeout
}
```
## type GitClone
GitClone holds optional parameters for \[daytona.GitService.Clone\].
Fields are pointers to distinguish between unset values and zero values. Use the corresponding With\* functions to set these options.
```go
type GitClone struct {
Branch *string // Branch to clone (defaults to repository's default branch)
CommitId *string // Specific commit SHA to checkout after cloning
Username *string // Username for HTTPS authentication
Password *string // Password or token for HTTPS authentication
}
```
## type GitCommit
GitCommit holds optional parameters for \[daytona.GitService.Commit\].
```go
type GitCommit struct {
AllowEmpty *bool // Allow creating commits with no staged changes
}
```
## type GitDeleteBranch
GitDeleteBranch holds optional parameters for \[daytona.GitService.DeleteBranch\].
```go
type GitDeleteBranch struct {
Force *bool // Force delete even if branch is not fully merged
}
```
## type GitPull
GitPull holds optional parameters for \[daytona.GitService.Pull\].
```go
type GitPull struct {
Username *string // Username for HTTPS authentication
Password *string // Password or token for HTTPS authentication
}
```
## type GitPush
GitPush holds optional parameters for \[daytona.GitService.Push\].
```go
type GitPush struct {
Username *string // Username for HTTPS authentication
Password *string // Password or token for HTTPS authentication
}
```
## type PipInstall
PipInstall holds optional parameters for \[daytona.Image.PipInstall\].
```go
type PipInstall struct {
FindLinks []string // URLs to search for packages
IndexURL string // Base URL of the Python Package Index
ExtraIndexURLs []string // Extra index URLs for package lookup
Pre bool // Allow pre-release and development versions
ExtraOptions string // Additional pip command-line options
}
```
## type PtySession
PtySession holds optional parameters for \[daytona.ProcessService.CreatePtySession\].
```go
type PtySession struct {
PtySize *types.PtySize // Terminal dimensions (rows and columns)
Env map[string]string // Environment variables for the PTY session
}
```
## type RunCode
RunCode holds optional parameters for \[daytona.CodeInterpreterService.RunCode\].
```go
type RunCode struct {
ContextID string // Interpreter context ID for persistent state
Env map[string]string // Environment variables for code execution
Timeout *time.Duration // Execution timeout
}
```
## type SetFilePermissions
SetFilePermissions holds optional parameters for \[daytona.FileSystemService.SetFilePermissions\].
```go
type SetFilePermissions struct {
Mode *string // Unix file permissions (e.g., "0644")
Owner *string // File owner username
Group *string // File group name
}
```
{/* Code generated by gomarkdoc. DO NOT EDIT */}
# types
```go
import "github.com/daytonaio/daytona/libs/sdk-go/pkg/types"
```
## Index
- [Constants](#constants)
- [type Chart](#type-chart)
- [type CodeLanguage](#type-codelanguage)
- [type CodeRunParams](#type-coderunparams)
- [type CreateSnapshotParams](#type-createsnapshotparams)
- [type DaytonaConfig](#type-daytonaconfig)
- [type ExecuteResponse](#type-executeresponse)
- [type ExecutionArtifacts](#type-executionartifacts)
- [type ExecutionError](#type-executionerror)
- [type ExecutionResult](#type-executionresult)
- [type ExperimentalConfig](#type-experimentalconfig)
- [type FileDownloadRequest](#type-filedownloadrequest)
- [type FileDownloadResponse](#type-filedownloadresponse)
- [type FileInfo](#type-fileinfo)
- [type FileStatus](#type-filestatus)
- [type FileUpload](#type-fileupload)
- [type GitCommitResponse](#type-gitcommitresponse)
- [type GitStatus](#type-gitstatus)
- [type ImageParams](#type-imageparams)
- [type LspLanguageID](#type-lsplanguageid)
- [type OutputMessage](#type-outputmessage)
- [type PaginatedSnapshots](#type-paginatedsnapshots)
- [type Position](#type-position)
- [type PreviewLink](#type-previewlink)
- [type PtyResult](#type-ptyresult)
- [type PtySessionInfo](#type-ptysessioninfo)
- [type PtySize](#type-ptysize)
- [type Resources](#type-resources)
- [type SandboxBaseParams](#type-sandboxbaseparams)
- [type ScreenshotOptions](#type-screenshotoptions)
- [type ScreenshotRegion](#type-screenshotregion)
- [type ScreenshotResponse](#type-screenshotresponse)
- [type SignedPreviewLink](#type-signedpreviewlink)
- [type Snapshot](#type-snapshot)
- [type SnapshotParams](#type-snapshotparams)
- [type Volume](#type-volume)
- [type VolumeMount](#type-volumemount)
## Constants
```go
const CodeToolboxLanguageLabel = "code-toolbox-language"
```
## type Chart
```go
type Chart = toolbox.Chart
```
## type CodeLanguage
CodeLanguage
```go
type CodeLanguage string
```
```go
const (
CodeLanguagePython CodeLanguage = "python"
CodeLanguageJavaScript CodeLanguage = "javascript"
CodeLanguageTypeScript CodeLanguage = "typescript"
)
```
## type CodeRunParams
CodeRunParams represents parameters for code execution
```go
type CodeRunParams struct {
Argv []string
Env map[string]string
}
```
## type CreateSnapshotParams
CreateSnapshotParams represents parameters for creating a snapshot
```go
type CreateSnapshotParams struct {
Name string
Image any // string or *Image
Resources *Resources
Entrypoint []string
SkipValidation *bool
}
```
## type DaytonaConfig
DaytonaConfig represents the configuration for the Daytona client. When a field is left empty, the client will fall back to environment variables or defaults.
```go
type DaytonaConfig struct {
APIKey string
JWTToken string
OrganizationID string
APIUrl string
Target string
Experimental *ExperimentalConfig
}
```
## type ExecuteResponse
ExecuteResponse represents a command execution response
```go
type ExecuteResponse struct {
ExitCode int
Result string
Artifacts *ExecutionArtifacts // nil when no artifacts available
}
```
## type ExecutionArtifacts
ExecutionArtifacts represents execution output artifacts
```go
type ExecutionArtifacts struct {
Stdout string
Charts []Chart
}
```
## type ExecutionError
ExecutionError represents a code execution error
```go
type ExecutionError struct {
Name string
Value string
Traceback *string // Optional stack trace; nil when not available
}
```
## type ExecutionResult
ExecutionResult represents code interpreter execution result
```go
type ExecutionResult struct {
Stdout string
Stderr string
Charts []Chart // Optional charts from matplotlib
Error *ExecutionError // nil = success, non-nil = execution failed
}
```
## type ExperimentalConfig
ExperimentalConfig holds experimental feature flags for the Daytona client.
```go
type ExperimentalConfig struct {
OtelEnabled bool // Enable OpenTelemetry tracing and metrics
}
```
## type FileDownloadRequest
FileDownloadRequest
```go
type FileDownloadRequest struct {
Source string
Destination *string // nil = download to memory (return []byte), non-nil = save to file path
}
```
## type FileDownloadResponse
FileDownloadResponse represents a file download response
```go
type FileDownloadResponse struct {
Source string
Result any // []byte or string (path)
Error *string // nil = success, non-nil = error message
}
```
## type FileInfo
FileInfo represents file metadata
```go
type FileInfo struct {
Name string
Size int64
Mode string
ModifiedTime time.Time
IsDirectory bool
}
```
## type FileStatus
FileStatus represents the status of a file in git
```go
type FileStatus struct {
Path string
Status string
}
```
## type FileUpload
FileUpload represents a file to upload
```go
type FileUpload struct {
Source any // []byte or string (path)
Destination string
}
```
## type GitCommitResponse
GitCommitResponse
```go
type GitCommitResponse struct {
SHA string
}
```
## type GitStatus
GitStatus represents git repository status
```go
type GitStatus struct {
CurrentBranch string
Ahead int
Behind int
BranchPublished bool
FileStatus []FileStatus
}
```
## type ImageParams
ImageParams represents parameters for creating a sandbox from an image
```go
type ImageParams struct {
SandboxBaseParams
Image any // string or *Image
Resources *Resources
}
```
## type LspLanguageID
```go
type LspLanguageID string
```
```go
const (
LspLanguagePython LspLanguageID = "python"
LspLanguageJavaScript LspLanguageID = "javascript"
LspLanguageTypeScript LspLanguageID = "typescript"
)
```
## type OutputMessage
OutputMessage represents an output message
```go
type OutputMessage struct {
Type string `json:"type"`
Text string `json:"text"`
Name string `json:"name"`
Value string `json:"value"`
Traceback string `json:"traceback"`
}
```
## type PaginatedSnapshots
PaginatedSnapshots represents a paginated list of snapshots
```go
type PaginatedSnapshots struct {
Items []*Snapshot
Total int
Page int
TotalPages int
}
```
## type Position
Position represents a position in a document
```go
type Position struct {
Line int // zero-based
Character int // zero-based
}
```
## type PreviewLink
PreviewLink contains the URL and authentication token for a sandbox preview.
```go
type PreviewLink struct {
URL string
Token string
}
```
## type PtyResult
PtyResult represents PTY session exit information
```go
type PtyResult struct {
ExitCode *int // nil = process still running, non-nil = exit code
Error *string // nil = success, non-nil = error message
}
```
## type PtySessionInfo
PtySessionInfo represents PTY session information
```go
type PtySessionInfo struct {
ID string
Active bool
CWD string // Current working directory; may be empty if unavailable
Cols int
Rows int
ProcessID *int // Process ID; may be nil if unavailable
CreatedAt time.Time
}
```
## type PtySize
PtySize represents terminal dimensions
```go
type PtySize struct {
Rows int
Cols int
}
```
## type Resources
Resources represents resource allocation for a sandbox.
```go
type Resources struct {
CPU int
GPU int
Memory int
Disk int
}
```
## type SandboxBaseParams
SandboxBaseParams contains common parameters for sandbox creation.
```go
type SandboxBaseParams struct {
Name string
User string
Language CodeLanguage
EnvVars map[string]string
Labels map[string]string
Public bool
AutoStopInterval *int // nil = no auto-stop, 0 = immediate stop
AutoArchiveInterval *int // nil = no auto-archive, 0 = immediate archive
AutoDeleteInterval *int // nil = no auto-delete, 0 = immediate delete
Volumes []VolumeMount
NetworkBlockAll bool
NetworkAllowList *string
Ephemeral bool
}
```
## type ScreenshotOptions
```go
type ScreenshotOptions struct {
ShowCursor *bool // nil = default, true = show, false = hide
Format *string // nil = default format (PNG), or "jpeg", "webp", etc.
Quality *int // nil = default quality, 0-100 for JPEG/WebP
Scale *float64 // nil = 1.0, scaling factor for the screenshot
}
```
## type ScreenshotRegion
ScreenshotRegion represents a screenshot region
```go
type ScreenshotRegion struct {
X int
Y int
Width int
Height int
}
```
## type ScreenshotResponse
```go
type ScreenshotResponse struct {
Image string // base64-encoded image data
Width int
Height int
SizeBytes *int // Size in bytes
}
```
## type SignedPreviewLink
SignedPreviewLink contains the signed URL, authentication token, port, and sandbox ID for a sandbox preview.
```go
type SignedPreviewLink struct {
SandboxID string
Port int
Token string
URL string
}
```
## type Snapshot
Snapshot represents a Daytona snapshot
```go
type Snapshot struct {
ID string `json:"id"`
OrganizationID string `json:"organizationId,omitempty"`
General bool `json:"general"`
Name string `json:"name"`
ImageName string `json:"imageName,omitempty"`
State string `json:"state"`
Size *float64 `json:"size,omitempty"`
Entrypoint []string `json:"entrypoint,omitempty"`
CPU int `json:"cpu"`
GPU int `json:"gpu"`
Memory int `json:"mem"` // API uses "mem" not "memory"
Disk int `json:"disk"`
ErrorReason *string `json:"errorReason,omitempty"` // nil = success, non-nil = error reason if snapshot failed
SkipValidation bool `json:"skipValidation"`
CreatedAt time.Time `json:"createdAt"`
UpdatedAt time.Time `json:"updatedAt"`
LastUsedAt *time.Time `json:"lastUsedAt,omitempty"`
}
```
## type SnapshotParams
SnapshotParams represents parameters for creating a sandbox from a snapshot
```go
type SnapshotParams struct {
SandboxBaseParams
Snapshot string
}
```
## type Volume
Volume represents a Daytona volume
```go
type Volume struct {
ID string `json:"id"`
Name string `json:"name"`
OrganizationID string `json:"organizationId"`
State string `json:"state"`
ErrorReason *string `json:"errorReason,omitempty"`
CreatedAt time.Time `json:"createdAt"`
UpdatedAt time.Time `json:"updatedAt"`
LastUsedAt time.Time `json:"lastUsedAt,omitempty"`
}
```
## type VolumeMount
VolumeMount represents a volume mount configuration
```go
type VolumeMount struct {
VolumeID string
MountPath string
Subpath *string // Optional subpath within the volume; nil = mount entire volume
}
```
This guide demonstrates how to use `DaytonaCodeExecutor` for [AG2](https://ag2.ai/) to build a multi-agent system that automatically fixes broken code in a secure sandbox environment. The executor enables agents to run Python, JavaScript, TypeScript, and Bash code within isolated Daytona sandboxes, with no risk to your local machine.
In this example, we build a bug fixer that takes broken code as input, analyzes the bug, proposes a fix, and verifies it by actually executing the code in a Daytona sandbox. If the fix fails, the agent sees the error output and retries with a different approach, continuing until the code passes or the maximum number of attempts is reached.
---
### 1. Workflow Overview
You provide broken code. The `bug_fixer` agent (LLM) analyzes it and proposes a fix wrapped in a fenced code block. The `code_executor` agent extracts the code block and runs it in a Daytona sandbox. If execution fails, the bug fixer sees the full error output and tries again. Once the code passes, the agents terminate and the sandbox is automatically deleted.
The key benefit: every fix attempt is verified by actually running the code — not just reviewed by the LLM.
### 2. Project Setup
#### Clone the Repository
Clone the Daytona repository and navigate to the example directory:
```bash
git clone https://github.com/daytonaio/daytona
cd daytona/guides/python/ag2/bug-fixer-agent/openai
```
#### Install Dependencies
:::note[Python Version Requirement]
This example requires **Python 3.10 or higher**. It is recommended to use a virtual environment (e.g., `venv` or `poetry`) to isolate project dependencies.
:::
Install the required packages for this example:
```bash
pip install "ag2[daytona,openai]" python-dotenv
```
The packages include:
- `ag2[daytona,openai]`: AG2 with the Daytona code executor and OpenAI model support
- `python-dotenv`: Loads environment variables from a `.env` file
#### Configure Environment
Get your API keys and configure your environment:
1. **Daytona API key:** Get it from [Daytona Dashboard](https://app.daytona.io/dashboard/keys)
2. **OpenAI API key:** Get it from [OpenAI Platform](https://platform.openai.com/api-keys)
Create a `.env` file in your project directory:
```bash
DAYTONA_API_KEY=dtn_***
OPENAI_API_KEY=sk-***
```
### 3. Understanding the Core Components
Before diving into the implementation, let's understand the key components:
#### AG2 ConversableAgent
`ConversableAgent` is AG2's general-purpose agent. Each agent can be configured as either an LLM agent (with a model and system prompt) or a non-LLM agent (`llm_config=False`) that responds through registered reply handlers — in our case, code execution via `code_execution_config`. The two agents communicate by passing messages back and forth until a termination condition is met.
#### DaytonaCodeExecutor
`DaytonaCodeExecutor` implements the AG2 `CodeExecutor` protocol. When used as a context manager, it creates a Daytona sandbox on entry and automatically deletes it on exit. It reuses the same sandbox across all code executions within the session, extracting and running fenced code blocks from agent messages. The language is inferred from the code block tag (` ```python `, ` ```javascript `, ` ```typescript `).
### 4. Implementation
#### Step 1: Imports and environment
```python
import os
from autogen import ConversableAgent, LLMConfig
from autogen.coding import DaytonaCodeExecutor
from dotenv import load_dotenv
load_dotenv()
```
#### Step 2: Bug fixer system prompt
The system prompt drives the iterative fix loop. It tells the agent which languages are supported, instructs it to wrap fixes in fenced code blocks, and separates the fix message from the TERMINATE signal so the executor always runs the code before the session ends:
```python
BUG_FIXER_SYSTEM_MESSAGE = """You are an expert bug fixer. You support Python, JavaScript, and TypeScript.
If asked to fix code in any other language, refuse and explain which languages are supported.
When given broken code:
1. Analyze the bug carefully and identify the root cause
2. Write the complete fixed code in a fenced code block using the correct language tag
3. Always include assertions or print statements at the end to verify the fix works
4. If your previous fix didn't work, analyze the error output and try a different approach
5. Once the code runs successfully, reply with just the word TERMINATE — never in the same message as a code block
Always wrap your code in fenced code blocks (```python, ```javascript, or ```typescript). Never explain without providing fixed code.
Never include TERMINATE in a message that contains a code block.
"""
```
:::note[Why separate TERMINATE from the code block?]
AG2 checks `is_termination_msg` on every incoming message. If `bug_fixer` includes `TERMINATE` in the same message as a code block, the conversation ends before `code_executor` has a chance to extract and run the fix. Keeping them in separate messages ensures every proposed fix is actually executed in the sandbox before the session terminates.
The empty content check handles a second termination case: when `bug_fixer` refuses to fix code in an unsupported language, it sends a refusal message with no code block. `code_executor` has nothing to execute and sends back an empty reply. Without the empty check, the conversation would loop until `max_turns` is exhausted — checking for empty content stops it immediately.
:::
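The termination predicate is plain Python, so both cases above can be exercised in isolation. This is the same lambda registered on `bug_fixer` in the next step, called with the message-dict shape AG2 passes it:

```python
# Termination predicate from the bug_fixer agent: stop on TERMINATE
# or on an empty/missing content field (the refusal case).
is_termination_msg = lambda x: (
    "TERMINATE" in (x.get("content") or "") or not (x.get("content") or "").strip()
)

print(is_termination_msg({"content": "TERMINATE"}))   # True: explicit stop signal
print(is_termination_msg({"content": "   "}))         # True: empty executor reply
print(is_termination_msg({"content": None}))          # True: missing content
print(is_termination_msg({"content": "fixed code"}))  # False: loop continues
```

The `or ""` guard matters: AG2 messages can carry `None` content, and `"TERMINATE" in None` would raise a `TypeError`.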
#### Step 3: Create the agents
```python
def fix_bug(broken_code: str, error_description: str = "") -> None:
llm_config = LLMConfig(
{
"model": "gpt-4o-mini",
"api_key": os.environ["OPENAI_API_KEY"],
}
)
with DaytonaCodeExecutor(timeout=60) as executor:
bug_fixer = ConversableAgent(
name="bug_fixer",
system_message=BUG_FIXER_SYSTEM_MESSAGE,
llm_config=llm_config,
code_execution_config=False,
is_termination_msg=lambda x: (
"TERMINATE" in (x.get("content") or "") or not (x.get("content") or "").strip()
),
)
code_executor = ConversableAgent(
name="code_executor",
llm_config=False,
code_execution_config={"executor": executor},
)
```
`DaytonaCodeExecutor` is used as a context manager so the sandbox is automatically cleaned up when `fix_bug` returns. `bug_fixer` owns the LLM reasoning; `code_executor` owns sandbox execution and never calls the LLM itself (`llm_config=False`).
The optional `error_description` parameter can be used to pass additional context about the failure — for example, a stack trace, a known symptom, or a hint about the cause. In the examples below we leave it empty, as the agent is capable of identifying and fixing the bugs purely from the assertion output.
#### Step 4: Start the conversation
```python
message = f"Fix this broken code:\n\n\n{broken_code}\n"
if error_description:
message += f"\n\nError: {error_description}"
code_executor.run(
recipient=bug_fixer,
message=message,
max_turns=8,
).process()
```
`code_executor` initiates the chat because it owns the problem — the broken code. `bug_fixer` receives it as its first message, proposes a fix, and waits for execution results.
:::tip[Inspecting the fix run]
Assign the return value of `run()` before calling `process()` to access more details about the session:
```python
response = code_executor.run(recipient=bug_fixer, message=message, max_turns=8)
response.process()
response.messages # full message exchange between agents
response.cost # token usage and cost breakdown per model
response.summary # conversation summary (requires summary_method to be set)
```
:::
### 5. Running the Example
The complete example ships with three broken code snippets, one per language:
**Example 1 — Python: postfix evaluator with swapped operands**
The subtraction and division operators pop two values from the stack but apply them in reverse order, producing wrong results for non-commutative operations.
```python
elif token == '-':
stack.append(b - a) # Bug: reversed — should be a - b
elif token == '/':
stack.append(b // a) # Bug: reversed — should be a // b
```
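Running the broken operators directly shows the failure output the bug fixer reasons from. A quick standalone check (not part of `main.py`, which sends this code to the agent instead of executing it locally):

```python
# Standalone reproduction of the Example 1 bug.
def eval_postfix(expression):
    stack = []
    for token in expression.split():
        if token.lstrip('-').isdigit():
            stack.append(int(token))
        else:
            b = stack.pop()
            a = stack.pop()
            if token == '+':
                stack.append(a + b)
            elif token == '-':
                stack.append(b - a)  # bug: operands reversed
            elif token == '*':
                stack.append(a * b)
            elif token == '/':
                stack.append(b // a)  # bug: operands reversed
    return stack[0]

print(eval_postfix("3 4 +"))   # 7  — addition is commutative, so this passes
print(eval_postfix("10 3 -"))  # -7 — should be 7; the assertion reports "Got -7"
print(eval_postfix("12 4 /"))  # 0  — should be 3
```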
**Example 2 — JavaScript: wrong concatenation order in run-length encoder**
The character and count are concatenated in the wrong order in two places, producing `"a2b3c2"` instead of `"2a3b2c"`.
```javascript
result += str[i - 1] + count; // Bug: should be count + str[i - 1]
result += str[str.length - 1] + count; // Bug: should be count + str[str.length - 1]
```
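The corrected logic is easy to sanity-check. Here is a Python translation of the fixed encoder, for illustration only — the example itself fixes the JavaScript inside the sandbox:

```python
def encode(s):
    # Run-length encoder with count placed before the character.
    if not s:
        return ""
    result = ""
    count = 1
    for i in range(1, len(s)):
        if s[i] == s[i - 1]:
            count += 1
        else:
            result += str(count) + s[i - 1]  # count first, then character
            count = 1
    result += str(count) + s[-1]
    return result

print(encode("aabbbcc"))  # 2a3b2c
print(encode("abcd"))     # 1a1b1c1d
```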
**Example 3 — TypeScript: `Math.min` instead of `Math.max` in Kadane's algorithm**
Both calls use `Math.min` instead of `Math.max`, causing the algorithm to track the most negative subarray sum instead of the most positive.
```typescript
currentSum = Math.min(currentSum + nums[i], nums[i]); // Bug: should be Math.max
maxSum = Math.min(maxSum, currentSum); // Bug: should be Math.max
```
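For reference, the corrected algorithm — a Python sketch of Kadane's algorithm with the `Math.max` calls in the right place, against the same test cases the TypeScript snippet asserts:

```python
def max_subarray(nums):
    # Kadane's algorithm: track the best subarray sum ending here,
    # and the best sum seen overall.
    max_sum = current = nums[0]
    for x in nums[1:]:
        current = max(current + x, x)   # extend the run or start fresh
        max_sum = max(max_sum, current) # keep the best sum so far
    return max_sum

print(max_subarray([-2, 1, -3, 4, -1, 2, 1, -5, 4]))  # 6
print(max_subarray([1]))                              # 1
print(max_subarray([5, 4, -1, 7, 8]))                 # 23
```

With `min` in both positions, the same code would track the most negative running sum instead — exactly the bug the agent has to diagnose from the assertion output.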
Run all examples:
```bash
python main.py
```
#### Expected output
The following shows the full agent conversation for Example 1 (Python postfix evaluator):
````
============================================================
Example 1: Python — Postfix Expression Evaluator Bug
============================================================
code_executor (to bug_fixer):
Fix this broken code:
def eval_postfix(expression):
stack = []
for token in expression.split():
if token.lstrip('-').isdigit():
stack.append(int(token))
else:
b = stack.pop()
a = stack.pop()
if token == '+':
stack.append(a + b)
elif token == '-':
stack.append(b - a)
elif token == '*':
stack.append(a * b)
elif token == '/':
stack.append(b // a)
return stack[0]
assert eval_postfix("3 4 +") == 7
assert eval_postfix("10 3 -") == 7, f"Got {eval_postfix('10 3 -')}"
assert eval_postfix("12 4 /") == 3, f"Got {eval_postfix('12 4 /')}"
assert eval_postfix("2 3 4 * +") == 14
print("All postfix tests passed!")
--------------------------------------------------------------------------------
>>>>>>>> USING AUTO REPLY...
bug_fixer (to code_executor):
```python
def eval_postfix(expression):
stack = []
for token in expression.split():
if token.lstrip('-').isdigit():
stack.append(int(token))
else:
b = stack.pop()
a = stack.pop()
if token == '+':
stack.append(a + b)
elif token == '-':
stack.append(a - b) # Fixed order of operands for subtraction
elif token == '*':
stack.append(a * b)
elif token == '/':
stack.append(a // b) # Fixed order of operands for division
return stack[0]
assert eval_postfix("3 4 +") == 7
assert eval_postfix("10 3 -") == 7, f"Got {eval_postfix('10 3 -')}"
assert eval_postfix("12 4 /") == 3, f"Got {eval_postfix('12 4 /')}"
assert eval_postfix("2 3 4 * +") == 14
print("All postfix tests passed!")
```
--------------------------------------------------------------------------------
>>>>>>>> USING AUTO REPLY...
>>>>>>>> EXECUTING CODE BLOCK (inferred language is python)...
code_executor (to bug_fixer):
exitcode: 0 (execution succeeded)
Code output: All postfix tests passed!
--------------------------------------------------------------------------------
>>>>>>>> USING AUTO REPLY...
bug_fixer (to code_executor):
TERMINATE
````
The agent correctly identified both reversed operand bugs from the assertion failure output alone and resolved them in a single attempt, adding its own `# Fixed order of operands` comments to the corrected lines.
#### How the message loop works
`recipient=bug_fixer` in `run()` is what connects the two agents. AG2 sets up a managed back-and-forth loop between them — after each reply, the message is automatically forwarded to the other agent. The agents have no direct reference to each other outside of that call.
Tracing the session above step by step:
1. `code_executor.run(recipient=bug_fixer, ...)` — AG2 starts the loop and `code_executor` sends the broken code as plain text to `bug_fixer`. Nothing is executed yet.
2. `bug_fixer` (LLM) analyzes the code and replies with the fix wrapped in a ` ```python ` block.
3. AG2 calls `_generate_code_execution_reply_using_executor` on `code_executor` — a reply method registered automatically when `code_execution_config` is set. It scans `bug_fixer`'s last message for fenced code blocks, extracts the block, and calls `DaytonaCodeExecutor.execute_code_blocks()`.
4. Daytona runs the code in the sandbox and returns the exit code and output.
5. AG2 forwards the result (`exitcode: 0 (execution succeeded)\nCode output: All postfix tests passed!`) back to `bug_fixer` as `code_executor`'s reply.
6. `bug_fixer` sees the successful output and replies with `TERMINATE`.
7. AG2 checks `is_termination_msg` on the incoming message — returns `True`, conversation stops, the sandbox is deleted.
Note that the original broken code is never executed — only `bug_fixer`'s proposed fix goes into Daytona.
:::note
`>>>>>>>> USING AUTO REPLY...` is printed by AG2 before each automatic agent reply to indicate no human intervention is taking place.
:::
### 6. Complete Code
````python
import os
from autogen import ConversableAgent, LLMConfig
from autogen.coding import DaytonaCodeExecutor
from dotenv import load_dotenv
load_dotenv()
BUG_FIXER_SYSTEM_MESSAGE = """You are an expert bug fixer. You support Python, JavaScript, and TypeScript.
If asked to fix code in any other language, refuse and explain which languages are supported.
When given broken code:
1. Analyze the bug carefully and identify the root cause
2. Write the complete fixed code in a fenced code block using the correct language tag
3. Always include assertions or print statements at the end to verify the fix works
4. If your previous fix didn't work, analyze the error output and try a different approach
5. Once the code runs successfully, reply with just the word TERMINATE — never in the same message as a code block
Always wrap your code in fenced code blocks (```python, ```javascript, or ```typescript). Never explain without providing fixed code.
Never include TERMINATE in a message that contains a code block.
"""
def fix_bug(broken_code: str, error_description: str = "") -> None:
llm_config = LLMConfig(
{
"model": "gpt-4o-mini",
"api_key": os.environ["OPENAI_API_KEY"],
}
)
with DaytonaCodeExecutor(timeout=60) as executor:
bug_fixer = ConversableAgent(
name="bug_fixer",
system_message=BUG_FIXER_SYSTEM_MESSAGE,
llm_config=llm_config,
code_execution_config=False,
is_termination_msg=lambda x: (
"TERMINATE" in (x.get("content") or "") or not (x.get("content") or "").strip()
),
)
code_executor = ConversableAgent(
name="code_executor",
llm_config=False,
code_execution_config={"executor": executor},
)
message = f"Fix this broken code:\n\n\n{broken_code}\n"
if error_description:
message += f"\n\nError: {error_description}"
code_executor.run(
recipient=bug_fixer,
message=message,
max_turns=8,
).process()
if __name__ == "__main__":
# Example 1: Python — swapped operands in postfix expression evaluator
broken_postfix = """\
def eval_postfix(expression):
stack = []
for token in expression.split():
if token.lstrip('-').isdigit():
stack.append(int(token))
else:
b = stack.pop()
a = stack.pop()
if token == '+':
stack.append(a + b)
elif token == '-':
stack.append(b - a)
elif token == '*':
stack.append(a * b)
elif token == '/':
stack.append(b // a)
return stack[0]
assert eval_postfix("3 4 +") == 7
assert eval_postfix("10 3 -") == 7, f"Got {eval_postfix('10 3 -')}"
assert eval_postfix("12 4 /") == 3, f"Got {eval_postfix('12 4 /')}"
assert eval_postfix("2 3 4 * +") == 14
print("All postfix tests passed!")
"""
print("=" * 60)
print("Example 1: Python — Postfix Expression Evaluator Bug")
print("=" * 60)
fix_bug(broken_postfix, "")
    # Example 2: JavaScript — wrong concatenation order in run-length encoder
    broken_js = """\
function encode(str) {
  if (!str) return '';
  let result = '';
  let count = 1;
  for (let i = 1; i < str.length; i++) {
    if (str[i] === str[i - 1]) {
      count++;
    } else {
      result += str[i - 1] + count;
      count = 1;
    }
  }
  result += str[str.length - 1] + count;
  return result;
}
console.assert(encode("aabbbcc") === "2a3b2c", `Expected "2a3b2c", got "${encode("aabbbcc")}"`);
console.assert(encode("abcd") === "1a1b1c1d", `Expected "1a1b1c1d", got "${encode("abcd")}"`);
console.log("All encoding tests passed!");
"""
    print("\n" + "=" * 60)
    print("Example 2: JavaScript — Run-Length Encoder Bug")
    print("=" * 60)
    fix_bug(broken_js, "")
    # Example 3: TypeScript — Math.min instead of Math.max in Kadane's algorithm
    broken_ts = """\
function maxSubarray(nums: number[]): number {
  let maxSum = nums[0];
  let currentSum = nums[0];
  for (let i = 1; i < nums.length; i++) {
    currentSum = Math.min(currentSum + nums[i], nums[i]);
    maxSum = Math.min(maxSum, currentSum);
  }
  return maxSum;
}
console.assert(maxSubarray([-2, 1, -3, 4, -1, 2, 1, -5, 4]) === 6,
  `Expected 6, got ${maxSubarray([-2, 1, -3, 4, -1, 2, 1, -5, 4])}`);
console.assert(maxSubarray([1]) === 1,
  `Expected 1, got ${maxSubarray([1])}`);
console.assert(maxSubarray([5, 4, -1, 7, 8]) === 23,
  `Expected 23, got ${maxSubarray([5, 4, -1, 7, 8])}`);
console.log("All max subarray tests passed!");
"""
    print("\n" + "=" * 60)
    print("Example 3: TypeScript — Max Subarray Bug")
    print("=" * 60)
    fix_bug(broken_ts, "")
````
**Key advantages of this approach:**
- **Execution-verified fixes:** Every proposed fix is actually run in a sandbox — the agent only terminates when the code passes, not just when it looks correct
- **Secure execution:** Fix attempts run in isolated Daytona sandboxes, not on your machine
- **Multi-language support:** Python, JavaScript, TypeScript, and Bash — language is inferred automatically from the LLM's fenced code block
- **Iterative refinement:** If a fix fails, the agent sees the full error output and retries automatically
- **Automatic cleanup:** The sandbox is deleted as soon as `fix_bug` returns, regardless of outcome
### 7. API Reference
For the complete API reference of `DaytonaCodeExecutor`, including all configuration options and supported parameters, see the [DaytonaCodeExecutor documentation](https://docs.ag2.ai/latest/docs/api-reference/autogen/coding/DaytonaCodeExecutor/).
import { Image } from 'astro:assets'
import notesAppResult from '../../../../../assets/docs/images/inngest-agentkit-notes-app.gif'
This guide demonstrates how to set up and use a fully autonomous coding agent that performs software development tasks in a [Daytona](https://daytona.io) sandbox environment. The agent is built using [AgentKit](https://agentkit.inngest.com/) and leverages Daytona sandboxes for secure, isolated execution. It can create web apps, run tests, execute scripts, and more, automating multi-step workflows based on user prompts.
---
### 1. Workflow Overview
You provide a natural language prompt describing the software task. The agent reasons about your request, plans the steps, and executes them securely in a Daytona sandbox, handling everything from project setup to live previews.
### 2. Project Setup
#### Clone the Repository
Clone the [repository](https://github.com/daytonaio/daytona) and navigate to the example directory:
```bash
git clone https://github.com/daytonaio/daytona.git
cd daytona/guides/typescript/agentkit-inngest/coding-agent/anthropic
```
#### Configure Environment
Get your API keys:
- **Daytona API key:** [Daytona Dashboard](https://app.daytona.io/dashboard/keys)
- **Anthropic API key:** [Anthropic Console](https://console.anthropic.com/)
Copy `.env.example` to `.env` and add your keys:
```bash
DAYTONA_API_KEY=your_daytona_key
ANTHROPIC_API_KEY=your_anthropic_key
```
#### Local Usage
:::note[Node.js Version]
Node.js 18 or newer is required to run the coding agent locally. Please ensure your environment meets this requirement before proceeding.
:::
Install dependencies:
```bash
npm install
```
Run the agent:
```bash
npm run start
```
#### Docker
Build the Docker image:
```bash
docker buildx build . -t coding-agent
```
Run the container:
```bash
docker run --rm -it coding-agent
```
### 3. Configuration
- **Prompt Setting:** Edit the main prompt in `network.run(...)` in `src/index.ts` to change the agent's task.
- **Debug Logs:** Set `enableDebugLogs=true` for detailed agent flow tracking.
### 4. Example Usage
Try the default prompt in `src/index.ts`:
```typescript
const result = await network.run(
  `Create a minimal React app called "Notes" that lets users add, view, and delete notes. Each note should have a title and content. Use Create React App or Vite for setup. Include a simple UI with a form to add notes and a list to display them.`
)
```
**The agent will:**
- Scaffold the app
- Install dependencies
- Start the dev server
- Generate a preview link
Terminal output example:
```
✔️ App is ready!
Preview: https://5173-3a828150-1573-42e3-bf9f-9793a2c2c0c2.proxy.daytona.works
```
The image below shows the result generated in this run:
### 5. Understanding the Agent's Execution Flow
The agent operates by leveraging a large language model (LLM) under the hood and has access to a suite of specialized tools. These tools enable the agent to perform everything needed inside the Daytona sandbox environment, from project setup to app deployment. At each step, the agent reasons about the current goal, decides which tool to call, and orchestrates the workflow to achieve the desired outcome.
Below is a step-by-step breakdown of how the agent builds the demo Notes app, showing each iteration, the agent's message, and the tool used:
#### Project Initialization
**Agent message:**
```
I'll create a React app for Notes using Vite, which provides a faster and more modern setup compared to Create React App. I'll break this down into steps:
- First, I'll create the project using Vite
- Install necessary dependencies
- Create the components for the Notes app
- Implement the core functionality
Let's get started:
```
**Tool used:**
`shellTool` was called with command:
```
npm create vite@latest notes -- --template react
```
#### Install Dependencies
**Agent message:**
```
Now, I'll navigate to the project directory and install dependencies:
```
**Tool used:**
`shellTool` was called with command:
```
cd notes && npm install
```
#### Create Components & Update App Structure
**Agent message:**
```
Now, I'll create the necessary components and update the app structure:
1. Update App.jsx
2. Update App.css for basic styling
```
**Tool used:**
`uploadFilesTool` was called to upload:
```
App.jsx (main component logic)
App.css (basic styling)
```
#### Start Development Server
**Agent message:**
```
Now, I'll start the development server:
```
**Tool used:**
`startDevServerTool` was called with command:
```
cd notes && npm run dev
```
#### Check Dev Server Health
**Agent message:**
```
Let's check the dev server health:
```
**Tool used:**
`checkDevServerHealthTool` was called to verify that the development server is running and accessible.
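The guide doesn't show the tool's internals, but conceptually a dev-server health check amounts to polling the server URL until it responds. The sketch below is illustrative only (the helper name and retry policy are assumptions, not the repo's actual code):

```typescript
// Illustrative sketch — NOT the actual checkDevServerHealthTool implementation.
// Poll a URL until it responds with a successful status or the timeout elapses.
async function waitForServer(url: string, timeoutMs = 30_000): Promise<boolean> {
  const deadline = Date.now() + timeoutMs
  while (Date.now() < deadline) {
    try {
      const res = await fetch(url) // Node.js 18+ global fetch
      if (res.ok) return true
    } catch {
      // Server not accepting connections yet — retry after a short delay
    }
    await new Promise((r) => setTimeout(r, 1_000))
  }
  return false
}
```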
#### App Summary & Features
**Agent message:**
```
Great! I've created a minimal React Notes app with the following features:
- Add notes with a title and content
- View the list of notes
- Delete individual notes
- Basic responsive styling
Key details:
- Used Vite for project setup
- Implemented state management with React hooks (useState)
- Simple form for adding notes
- Notes are stored in local state
- Each note has a unique ID, title, and content
- Delete functionality for individual notes
DEV_SERVER_PORT=5173
TASK_COMPLETED
The app is now running and ready to use. Users can:
1. Enter a note title in the first input
2. Enter note content in the textarea
3. Click "Add Note" to save the note
4. View notes in the list below
5. Delete notes by clicking the "Delete" button next to each note
Would you like me to explain any part of the implementation or make any modifications?
```
Notice the `DEV_SERVER_PORT` and `TASK_COMPLETED` markers in the agent's output message.
The `DEV_SERVER_PORT` value is auto-detected by the agent and used to generate the preview link for your app.
`TASK_COMPLETED` signals that the task is finished, which the agent's routing logic uses to end the run.
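The routing step that consumes these markers can be sketched as a small parser. This is an illustrative helper, not the repo's exact code:

```typescript
// Illustrative sketch of the marker parsing behind the routing logic described above.
function parseAgentMarkers(output: string): { devServerPort?: number; taskCompleted: boolean } {
  // DEV_SERVER_PORT=<n> carries the auto-detected port for the preview link
  const portMatch = output.match(/DEV_SERVER_PORT=(\d+)/)
  return {
    devServerPort: portMatch ? Number(portMatch[1]) : undefined,
    // TASK_COMPLETED tells the router the agent is done
    taskCompleted: output.includes('TASK_COMPLETED'),
  }
}
```

For the output shown above, this yields port `5173` and a completed task.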
---
**Key advantages:**
- Secure, isolated execution
- Multi-language support
- Auto-detects dev server, starts it, and generates preview link
- Detailed debug logs for agent actions
import { Image } from 'astro:assets'
import ampSdkCodingAgentDemo from '../../../../../assets/docs/images/amp-sdk-coding-agent.gif'
This guide demonstrates how to run an autonomous coding agent based on [Amp Code](https://ampcode.com/) inside a Daytona sandbox environment. The agent can develop full-stack web apps, write code in any language, install dependencies, and run scripts. It can also start and manage dev servers, and generate preview links for live apps.
---
### 1. Workflow Overview
When you launch the main module, a Daytona sandbox is created and the Amp CLI is installed inside it. The agent uses Amp's [streaming JSON mode](https://ampcode.com/manual#cli-streaming-json) for programmatic control.
You interact with the main program via a command line chat interface. The program sends your prompts to the agent inside the sandbox, which executes them and returns the results:
```
$ npm run start
Creating sandbox...
Installing Amp CLI...
Starting Amp Code...
Thinking...
Got it! I'm ready to help. What would you like to build or work on?
Agent ready. Press Ctrl+C at any time to exit.
User: Create a Kanji flashcard app
Thinking...
> I'll create a Kanji flashcard app with flip animations, progress tracking, and multiple study modes. Here's the preview URL:
https://8000-29baaaf7-767a-4dff-8129-1e6ec2100b3e.daytonaproxy01.net
🔧 create_file /home/daytona/index.html
Successfully created file /home/daytona/index.html
🔧 create_file /home/daytona/start.sh
Running `python3 -m http.server 8000` via session command...
User:
```
The agent can also host web apps and provide you with a preview link using the [Daytona Preview Links](https://www.daytona.io/docs/en/preview.md) feature. When your task involves running or previewing a web application, the agent automatically reasons about this need, hosts the app, and generates a preview link for you to inspect the live result.
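The preview URLs in the transcript follow Daytona's `<port>-<sandboxId>.<proxy-domain>` pattern. A sketch of that pattern is below; treat the default domain as an assumption, since the proxy domain varies by runner (and in practice the Preview Links API returns the URL for you):

```typescript
// Illustrative only: Daytona preview URLs follow https://<port>-<sandboxId>.<proxyDomain>
// The proxy domain differs per runner; 'proxy.daytona.works' is an assumed default here.
function buildPreviewUrl(port: number, sandboxId: string, proxyDomain = 'proxy.daytona.works'): string {
  return `https://${port}-${sandboxId}.${proxyDomain}`
}
```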
You can continue interacting with your agent until you are finished. When you exit the program, the sandbox will be deleted automatically.
### 2. Project Setup
#### Clone the Repository
First, clone the daytona [repository](https://github.com/daytonaio/daytona.git) and navigate to the example directory:
```bash
git clone https://github.com/daytonaio/daytona.git
cd daytona/guides/typescript/amp/amp-sdk
```
#### Configure Environment
Get your API keys:
- **Daytona API key:** [Daytona Dashboard](https://app.daytona.io/dashboard/keys)
- **Amp API key:** [Amp Settings](https://ampcode.com/settings)
:::caution[Amp Paid Credits Required]
Amp's execute mode (`-x`) requires paid credits and cannot use the free tier. [Add credits here](https://ampcode.com/pay) before running this example.
:::
Copy `.env.example` to `.env` and add your keys:
```bash
DAYTONA_API_KEY=your_daytona_key
SANDBOX_AMP_API_KEY=your_amp_key
```
#### Local Usage
:::note[Node.js Version]
Node.js 18 or newer is required to run this example. Please ensure your environment meets this requirement before proceeding.
:::
Install dependencies:
```bash
npm install
```
Run the agent:
```bash
npm run start
```
The agent will start and wait for your prompt.
### 3. Understanding the Script
This example uses Amp's `--stream-json` mode for streaming output and the `-x` (execute) flag for autonomous operation. Commands are sent via a PTY (pseudo-terminal) for real-time streaming.
#### Initialization
On startup, the script:
1. Creates a new [Daytona sandbox](https://www.daytona.io/docs/en/sandboxes.md) with the Amp API key.
2. Installs the Amp CLI globally in the sandbox.
3. Creates a PTY for streaming output from Amp.
4. Sends a Daytona-aware system prompt as the first user message (preview URL pattern + instruction to write server startup command to `/home/daytona/start.sh`).
5. Enters the readline loop to send prompts and receive streamed responses.
6. On Ctrl+C, kills the PTY session, deletes the sandbox, and exits.
#### PTY Communication
The agent uses a pseudo-terminal (PTY) for streaming output from Amp:
```ts
// Create a PTY for streaming output from Amp
this.ptyHandle = await this.sandbox.process.createPty({
  id: `amp-pty-${Date.now()}`,
  cols: 120,
  rows: 30,
  onData: (data: Uint8Array) => this.handleData(data),
})

// Wait for PTY connection
await this.ptyHandle.waitForConnection()
#### Running Amp Commands
Each prompt is sent as an `amp` command with the `-x` (execute) flag for autonomous operation. The agent uses Amp's thread system to maintain conversation context:
```ts
// Run an amp command via PTY and wait for completion
private async runAmpCommand(args: string[]): Promise {
const command = ['amp', '--dangerously-allow-all', '--stream-json', '-m smart', ...args].join(' ')
// Send command to the PTY
await this.ptyHandle.sendInput(`cd /home/daytona && ${command}\n`)
// Wait for the response to complete (signaled by result message)
await new Promise((resolve) => {
this.onResponseComplete = resolve
})
}
// Process a user prompt
async processPrompt(prompt: string): Promise {
if (this.threadId) {
// Continue existing thread
await this.runAmpCommand(['-x', JSON.stringify(prompt), 'threads', 'continue', this.threadId])
} else {
// Start new thread
await this.runAmpCommand(['-x', JSON.stringify(prompt)])
}
}
```
#### Streaming JSON Messages
Amp outputs JSON lines that can be parsed to track agent activity. The `handleData` method buffers incoming data and processes complete lines:
```ts
// Handle streamed data from PTY
private handleData(data: Uint8Array): void {
  // Append new data to the buffer
  this.buffer += new TextDecoder().decode(data)
  // Split the buffer into complete lines
  const lines = this.buffer.split('\n')
  // Keep any incomplete line in the buffer for next time
  this.buffer = lines.pop() || ''
  // Process each complete line
  for (const line of lines.filter((l) => l.trim())) {
    this.handleJsonLine(line)
  }
}
```
Message types from Amp's streaming JSON:
- **system**: Session initialization with `subtype: 'init'` and `session_id` for thread tracking
- **assistant**: AI responses with text content and tool usage blocks
- **user**: Tool results (output from executed tools)
- **result**: Final execution result (success or error); signals response completion
```ts
private handleJsonLine(line: string): void {
  const parsed = JSON.parse(line) as AmpMessage
  if (parsed.type === 'system' && parsed.subtype === 'init') {
    // Capture thread ID for conversation continuation
    const sysMsg = parsed as { session_id?: string }
    if (sysMsg.session_id) this.threadId = sysMsg.session_id
  } else if (parsed.type === 'assistant') {
    // Display text and tool_use blocks
    const msg = parsed as AssistantMessage
    for (const block of msg.message.content) {
      if (block.type === 'text') { /* render text */ }
      else if (block.type === 'tool_use') { /* display tool */ }
    }
  } else if (parsed.type === 'user') {
    // Tool results: display output
  } else if (parsed.type === 'result') {
    // Signal response completion
    this.onResponseComplete?.()
  }
}
```
#### System Prompt and Main Loop
A Daytona-aware system prompt is sent as the first user message. It instructs the agent to use the preview URL pattern and to write the server start command into `/home/daytona/start.sh` (instead of executing directly in Amp), then provide the preview URL:
```ts
const defaultSystemPrompt = [
  'You are running in a Daytona sandbox.',
  `When running services on localhost, they will be accessible as: ${previewUrlPattern}`,
  'When you need to start a server, DO NOT run it directly.',
  'Instead, write only the server start command to /home/daytona/start.sh (one command, no markdown).',
  'After writing the start command, provide the preview URL to the user.',
].join(' ')

const ampSession = new AmpSession(sandbox)
await ampSession.initialize({ systemPrompt: defaultSystemPrompt })
```
When Amp is ready, the script runs a readline loop:
```ts
const rl = readline.createInterface({ input: process.stdin, output: process.stdout })

while (true) {
  const prompt = await new Promise<string>((resolve) => rl.question('User: ', resolve))
  if (prompt.trim()) {
    await ampSession.processPrompt(prompt)
    await startServerFromScript()
  }
}
```
The readline loop waits for user input, sends it to the agent, and displays the streamed response. If Amp produced `/home/daytona/start.sh`, the script is then launched via Daytona's session command API, so that a long-running or background server startup does not hang Amp's turns.
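A minimal sketch of that handoff, assuming the single-command `start.sh` contract set up by the system prompt (the helper name and the `nohup` wrapping are illustrative, not the repo's exact code):

```typescript
// Illustrative: turn the single-line /home/daytona/start.sh contents into a
// background launch command so the server outlives the Amp turn that wrote it.
function buildLaunchCommand(startShContents: string): string | null {
  const cmd = startShContents.trim()
  if (!cmd) return null // Amp didn't write a start command this turn
  // Detach from the terminal and capture logs; run from the sandbox home dir
  return `cd /home/daytona && nohup ${cmd} > server.log 2>&1 &`
}
```

The resulting string would then be executed through a Daytona session command rather than through Amp itself.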
**Key advantages:**
- Secure, isolated execution in Daytona sandboxes
- Streaming JSON output for real-time tool activity feedback
- PTY-based communication for streaming output
- Thread-based conversation continuity across prompts
- Uses Amp's `smart` mode for state-of-the-art model capabilities
- All agent code execution happens inside the sandbox
- Automatic preview link generation for deployed services
- Automatic cleanup on exit
This guide demonstrates how to run a two-agent autonomous coding system using the [Claude Agent SDK](https://platform.claude.com/docs/en/agent-sdk/overview) and Daytona sandboxes. The system consists of a **Project Manager Agent** (local) and a **Developer Agent** (in-sandbox), enabling advanced delegation, planning, and secure code execution.
The Project Manager Agent runs locally and uses the basic Anthropic interface with the `claude-sonnet-4-20250514` model for high-level planning and task delegation. The Developer Agent runs inside the Daytona sandbox and is created using the Claude Agent SDK, which leverages Claude Code for advanced coding and automation capabilities. This architecture separates high-level planning from low-level code execution for more robust automation.
A key advantage of this approach is its **extensibility**: you can easily replace the Project Manager Agent with your own custom orchestrator logic, or even another agent, making the system highly reusable and adaptable to a wide range of advanced automation and coordination use cases.
---
### 1. Workflow Overview
When the main module is launched, a Daytona sandbox is created for the Developer Agent, and a Project Manager Agent is initialized locally. Interaction with the system occurs via a command line chat interface. The Project Manager Agent receives prompts, plans the workflow, and delegates coding tasks to the Developer Agent. The Developer Agent executes tasks in the sandbox and streams results back to the Project Manager, who reviews and coordinates further actions. All logs and outputs from both agents are streamed in real time to the terminal, providing full visibility into the process as it is managed by the Project Manager Agent.
The Developer Agent can also host web apps and provide preview links using [Daytona Preview Links](https://www.daytona.io/docs/en/preview-and-authentication.md). The Project Manager Agent will present these links and summarize the results for you.
You can continue interacting with the system until you are finished. When you exit the program, the sandbox is deleted automatically.
---
### 2. Project Setup
#### Clone the Repository
First, clone the daytona [repository](https://github.com/daytonaio/daytona.git) and navigate to the example directory:
```bash
git clone https://github.com/daytonaio/daytona.git
cd daytona/guides/typescript/anthropic/multi-agent-claude-sdk
```
#### Configure Environment
To run this example, you need to set the following environment variables:
- `DAYTONA_API_KEY`: Required for access to Daytona sandboxes. Get it from [Daytona Dashboard](https://app.daytona.io/dashboard/keys)
- `ANTHROPIC_API_KEY`: Required for the **Project Manager Agent** (runs locally). Get it from [Claude Developer Platform](https://console.anthropic.com/settings/keys)
- `SANDBOX_ANTHROPIC_API_KEY`: **Optional** for the **Developer Agent** (runs in sandbox). If not provided, defaults to using `ANTHROPIC_API_KEY`. Get it from [Claude Developer Platform](https://console.anthropic.com/settings/keys)
Copy `.env.example` to `.env` and add your keys:
```bash
DAYTONA_API_KEY=your_daytona_key
ANTHROPIC_API_KEY=your_anthropic_key
SANDBOX_ANTHROPIC_API_KEY=your_anthropic_key
```
:::tip[Agent API Key Options]
You can use a single `ANTHROPIC_API_KEY` for both agents, or provide a separate `SANDBOX_ANTHROPIC_API_KEY` for billing/tracking purposes.
:::
:::caution[API Key Security]
The `SANDBOX_ANTHROPIC_API_KEY` is passed into the Daytona sandbox environment and is accessible to any code executed inside the sandbox.
:::
#### Local Usage
:::note[Node.js Version]
Node.js 18 or newer is required to run this example. Please ensure your environment meets this requirement before proceeding.
:::
Install dependencies:
```bash
npm install
```
Run the two-agent system:
```bash
npm run start
```
The system will start and wait for your prompt.
---
### 3. Example Usage
Example chat session:
```
$ npm run start
Creating Developer Agent sandbox...
Installing Developer Agent SDK...
Initializing Developer Agent...
Initializing Project Manager Agent...
=== Two-Agent System Ready ===
Project Manager Agent: Manages tasks and delegates to Developer (Green text)
Developer Agent: Executes code in Daytona sandbox (White text)
Press Ctrl+C at any time to exit.
User: make a lunar lander web app
[Project Manager] Processing your request...
[Project Manager]: I'll help you create a lunar lander web app! Let me delegate this task to the Developer Agent.
Create a lunar lander web game with the following features:
- HTML/CSS/JavaScript based
- Canvas graphics for the game
- Physics simulation (gravity, thrust, velocity)
- Keyboard controls (arrow keys)
- Landing detection (safe landing vs crash)
- Fuel management system
- Start the game on port 80 and provide the preview URL
[Delegating to Developer Agent]...
I'll help you create a lunar lander web app! Let me build a complete game with physics simulation, graphics, and controls.
🔨 Write
🔨 Write
🔨 Bash
Perfect! The web server is now running.
Your Lunar Lander game is live at:
🌐 https://80-17ac1c0f-d684-4122-93b5-8f52fd5393f8.proxy.daytona.works
[Project Manager]: Excellent! The Developer Agent has successfully created your lunar lander web app. The game is now running and accessible at the preview URL above. The implementation includes:
✓ Physics simulation with gravity and thrust
✓ Canvas-based graphics
✓ Keyboard controls for gameplay
✓ Landing detection system
✓ Fuel management
✓ Web server running on port 80
TASK_COMPLETE
[Project Manager] All tasks completed!
```
---
### 4. Understanding the Agent Architecture
This system is composed of two collaborating agents, each with a distinct role and implementation. Below is a step-by-step guide to how each agent operates and interacts within the workflow.
#### Project Manager Agent (Terminal Orchestration)
1. **User Interaction:**
- All user interaction occurs via the terminal with the Project Manager Agent.
- The Project Manager Agent is configured with a system prompt that defines its role and maintains the full conversation history.
2. **Awareness of Developer Agent:**
- The Project Manager Agent knows that a Developer Agent is available inside a Daytona sandbox and can be invoked as needed.
3. **Task Delegation:**
- When the Project Manager Agent determines that a coding task should be delegated, it encapsulates the task in dedicated delegation tags in its response (the exact tag format is defined by its system prompt).
- The system parses these tags and, when present, invokes the Developer Agent with the specified task.
4. **Iterative Workflow:**
- This process can repeat multiple times, with the Project Manager Agent reasoning about progress and delegating further tasks as needed.
5. **Session Completion:**
- When the Project Manager Agent determines the overall task is complete, it outputs `TASK_COMPLETE`, which signals the system to terminate the session.
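The delegation and termination protocol above can be sketched as a small parser. The `<delegate>` tag name below is an illustrative stand-in; the real tag is whatever the system prompt in `src/index.ts` defines:

```typescript
// Sketch of the orchestration parsing step. '<delegate>' is an illustrative
// placeholder for the delegation tag defined by the Project Manager's system prompt.
function parseManagerReply(reply: string): { task?: string; done: boolean } {
  // Extract a delegated task, if the reply contains one
  const match = reply.match(/<delegate>([\s\S]*?)<\/delegate>/)
  return {
    task: match ? match[1].trim() : undefined,
    // TASK_COMPLETE ends the session
    done: reply.includes('TASK_COMPLETE'),
  }
}
```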
#### Developer Agent (Sandbox Execution)
1. **Provisioning:**
- The Developer Agent is provisioned inside a Daytona sandbox and is responsible for executing coding tasks.
2. **SDK Installation:**
- The system installs the Claude Agent SDK in the sandbox by running `pip install` (see [process execution](https://www.daytona.io/docs/en/process-code-execution.md#process-execution)).
3. **Interpreter Context:**
- A new [code interpreter context](https://www.daytona.io/docs/en/process-code-execution.md#stateful-code-interpreter) is created for isolated execution.
4. **Script Upload:**
- The coding agent script is uploaded to the sandbox using [file uploading](https://www.daytona.io/docs/en/file-system-operations.md#uploading-a-single-file).
5. **SDK Initialization:**
- The Claude Agent SDK is initialized in the interpreter context (e.g., `import coding_agent`).
6. **Task Execution:**
- When a delegated task is received, the system sends it to the Developer Agent by running a Python command in the interpreter context:
```typescript
const result = await sandbox.codeInterpreter.runCode(
  `coding_agent.run_query_sync(os.environ.get('PROMPT', ''))`,
  {
    context: ctx,
    envs: { PROMPT: task },
    onStdout,
    onStderr,
  }
);
```
- The Developer Agent executes the task, streams output, and returns results to the Project Manager Agent for review and further coordination.
---
### 5. Customization
You can customize the Project Manager Agent's behavior by modifying the system prompt in `src/index.ts`. The current implementation:
- Uses dedicated delegation tags (defined in the system prompt) for delegation
- Automatically reviews Developer Agent outputs
- Says "TASK_COMPLETE" when finished
---
### 6. Cleanup
When you exit the main program, the Daytona sandbox and all files are automatically deleted.
---
**Key advantages:**
- Secure, isolated execution in Daytona sandboxes
- Hierarchical agent architecture for robust automation
- Extensible and reusable architecture
- Automatic dev server detection and live preview links
- Multi-language and full-stack support
- Simple setup and automatic cleanup
import { Image } from 'astro:assets'
import claudeAgentSDKInteractiveTerminalSandboxResult from '../../../../../assets/docs/images/claude-agent-sdk-interactive-terminal-sandbox-result.gif'
This guide demonstrates how to run an autonomous coding agent based on [Claude Code](https://code.claude.com/docs/en/overview) inside a Daytona sandbox environment. The agent uses the [Claude Agent SDK](https://platform.claude.com/docs/en/agent-sdk/overview) to follow user prompts.
The agent can develop full-stack web apps, write code in any language, install dependencies, and run scripts. It can also start and manage dev servers, and generate preview links for live apps.
---
### 1. Workflow Overview
When you launch the main module, a Daytona sandbox is created and a Python agent is initialized inside it. The agent is based on the [Claude Agent SDK](https://platform.claude.com/docs/en/agent-sdk/overview).
You interact with the main program via a command line chat interface. The program sends your prompts to the agent inside the sandbox, which executes them and returns the results:
```
$ npm run start
Creating sandbox...
Installing Agent SDK...
Initializing Agent SDK...
Press Ctrl+C at any time to exit.
User: Build a Zelda-like game where I can move around the screen and talk to famous programmers
Thinking...
I'll build a Zelda-like game for you! This will be a fun project with player movement and NPC interactions with famous programmers.
🔨 Write
🔨 Write
Now let me start a simple HTTP server to host the game:
🔨 Bash
Perfect! I've created a Zelda-like game called "Programmer's Quest" for you! 🎮
## Game Features:
✨ Zelda-style gameplay:
- Top-down 2D view with classic retro aesthetics
- Player character with sword and shield
- Grid-based movement system
- Environmental obstacles (trees and rocks)
👥 Famous Programmers as NPCs:
1. Linus Torvalds - Creator of Linux
2. Grace Hopper - COBOL pioneer and Admiral
3. Alan Turing - Father of computer science
4. Ada Lovelace - First computer programmer
5. Dennis Ritchie - Creator of C and UNIX
🎮 Controls:
- Arrow Keys or WASD - Move your character
- SPACE - Talk to NPCs when you're near them
🌟 Gameplay:
- Explore the grassy map and find all 5 legendary programmers
- Each NPC has multiple quotes that cycle when you talk to them
- NPCs glow when you're near them
- Dialog boxes appear with their famous quotes
- Track your progress in the HUD
## Play Now:
🎯 [Click here to play the game!](https://80-8e2c4d23-212a-4f1e-bb6c-abfa71aeed3a.proxy.daytona.works)
The game features smooth movement, collision detection with trees and rocks, and an immersive dialog system. Try to find and talk to all 5 famous programmers to learn their wisdom! Each has 3 different quotes that cycle as you keep talking to them.
Enjoy your adventure! 🗡️✨
User:
```
The agent can also host web apps and provide you with a preview link using the [Daytona Preview Links](https://www.daytona.io/docs/en/preview-and-authentication.md) feature. When your task involves running or previewing a web application, the agent automatically reasons about this need, hosts the app, and generates a preview link for you to inspect the live result.
You can continue interacting with your agent until you are finished. When you exit the program, the sandbox will be deleted automatically.
### 2. Project Setup
#### Clone the Repository
First, clone the daytona [repository](https://github.com/daytonaio/daytona.git) and navigate to the example directory:
```bash
git clone https://github.com/daytonaio/daytona.git
cd daytona/guides/typescript/anthropic/single-claude-agent-sdk
```
#### Configure Environment
Get your API keys:
- **Daytona API key:** [Daytona Dashboard](https://app.daytona.io/dashboard/keys)
- **Anthropic API key:** [Anthropic Console](https://console.anthropic.com/)
Copy `.env.example` to `.env` and add your keys:
```bash
DAYTONA_API_KEY=your_daytona_key
SANDBOX_ANTHROPIC_API_KEY=your_anthropic_key
```
:::caution[API Key Security]
The `SANDBOX_ANTHROPIC_API_KEY` is passed into the Daytona sandbox environment and is accessible to any code executed inside the sandbox.
:::
#### Local Usage
:::note[Node.js Version]
Node.js 18 or newer is required to run this example. Please ensure your environment meets this requirement before proceeding.
:::
Install dependencies:
```bash
npm install
```
Run the agent:
```bash
npm run start
```
The agent will start and wait for your prompt.
### 3. Example Usage
Example chat session:
```
$ npm run start
Creating sandbox...
Installing Agent SDK...
Initializing Agent SDK...
Press Ctrl+C at any time to exit.
User: Create a fun pet store landing page in Svelte
Thinking...
I'll create a fun pet store landing page in Svelte for you! Let me set up the project and build an engaging landing page with animations and a colorful design.
🔨 Bash
🔨 Bash
Now let me create a fun and engaging pet store landing page with animations, colorful sections, and interactive elements:
🔨 Read
🔨 Write
Now let me create the component files:
🔨 Write
🔨 Write
🔨 Write
🔨 Write
🔨 Write
Now let's update the main CSS file and start the development server:
🔨 Read
🔨 Write
Now let's start the development server:
🔨 Bash
🔨 TaskOutput
Perfect! 🎉 I've created a fun and colorful pet store landing page in Svelte! Here's what I built:
## Features:
✨ Hero Section
- Beautiful gradient background with animated floating cards
- Eye-catching navigation bar
- Call-to-action buttons with hover effects
- Bouncing emoji animations
🎯 Features Section
- 4 key benefits (Free Delivery, Premium Quality, Special Offers, Award Winning)
- Interactive cards with hover animations
- Icons that rotate and scale on hover
🛍️ Products Section
- 6 featured products with colorful backgrounds
- Star ratings
- Add to cart functionality
- Smooth hover animations
📧 Newsletter Section
- Vibrant gradient background with floating pet emojis
- Email subscription form
- Success message animation
- Special 20% off offer
👣 Footer
- Comprehensive navigation links
- Social media icons
- Dark gradient design
## Live Preview:
Your pet store is now running at: https://80-b3739199-d90e-4bde-ab18-450d74c950e8.proxy.daytona.works
The page includes:
- Smooth scroll behavior
- Responsive design for mobile devices
- Fun animations throughout
- Interactive elements with hover effects
- Colorful gradients and modern styling
- Emoji-based icons for a playful feel
Click the link to see your fun pet store landing page in action! 🐾
User:
```
### 4. Understanding the Agent's Architecture
This example consists of two main components:
- **Main Program:** The main program is a Node.js script (`index.ts`) that runs on your local machine. It uses the Daytona SDK to create and manage a Daytona sandbox. The main program provides a command line interface for interacting with the agent inside the sandbox.
- **Sandbox Agent:** The sandbox agent is a Python script (`coding_agent.py`) that runs inside the Daytona sandbox. It uses the Claude Agent SDK to create a customized coding agent similar to Claude Code.
#### Initialization
On initialization, the main program:
1. Creates a new [Daytona sandbox](https://www.daytona.io/docs/en/sandboxes.md) with your Anthropic API key included in the environment variables.
2. Installs the Claude Agent SDK by running `pip install` in the sandbox with [process execution](https://www.daytona.io/docs/en/process-code-execution.md#process-execution).
3. Creates a new [code interpreter context](https://www.daytona.io/docs/en/process-code-execution.md#stateful-code-interpreter).
4. Uploads the coding agent script to the sandbox with [file uploading](https://www.daytona.io/docs/en/file-system-operations.md#uploading-a-single-file).
5. Initializes the Claude Agent SDK by running `import coding_agent` in the code interpreter context.
6. Waits for user input and sends prompts to the agent in the code interpreter context as shown below.
#### Main Program Code
Once the agent is running, the program creates a readline interface to read user input and sends it to the agent.
Each user request is passed to the agent by running a Python command in the code interpreter context:
```typescript
const result = await sandbox.codeInterpreter.runCode(
  `coding_agent.run_query_sync(os.environ.get('PROMPT', ''))`,
  {
    context: ctx,
    envs: { PROMPT: prompt },
    onStdout,
    onStderr,
  }
)
```
The `onStdout` and `onStderr` callbacks are used to pass the agent's output back to the main program. After the agent finishes responding to the prompt, the main program waits for the next user input.
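For illustration, the callbacks can be as simple as forwarding each chunk to the local terminal (a minimal sketch; the exact callback signature used by the example is an assumption here):

```typescript
// Illustrative sketch: forward the agent's streamed output to the local terminal.
// The chunk type (string) is an assumption about the callback signature.
const onStdout = (chunk: string): void => {
  process.stdout.write(chunk)
}

const onStderr = (chunk: string): void => {
  process.stderr.write(chunk)
}
```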
#### Sandbox Agent Code
The sandbox agent uses the [Claude Agent SDK](https://platform.claude.com/docs/en/agent-sdk/overview) to create a customized coding agent based on Claude Code.
The agent is initialized with a system prompt that includes the workspace directory and an example of the [preview URL format](https://www.daytona.io/docs/en/preview-and-authentication.md):
```python
system_prompt = """
You are running in a Daytona sandbox.
Use the /home/daytona directory instead of /workspace for file operations.
Your public preview URL for port 80 is: {}.
""".format(preview_url)
```
It also specifies the [tools and permission mode](https://platform.claude.com/docs/en/agent-sdk/quickstart#key-concepts) of the agent:
```python
client = ClaudeSDKClient(
    options=ClaudeAgentOptions(
        allowed_tools=["Read", "Edit", "Glob", "Grep", "Bash"],
        permission_mode="acceptEdits",
        system_prompt=system_prompt
    )
)
```
The code to run queries and receive responses follows the examples in Anthropic's [Claude Agent Python SDK documentation](https://platform.claude.com/docs/en/agent-sdk/python).
#### Clean up
When you exit the main program, the Daytona sandbox and all files are automatically deleted.
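A minimal sketch of such a cleanup step (illustrative only; the handler wiring below is an assumption, and the `Deletable` interface is a stand-in for the SDK's `Sandbox` class):

```typescript
// Illustrative sketch: delete the sandbox on exit so no resources are left behind.
// Deletable is a hypothetical stand-in for the SDK's Sandbox class.
interface Deletable {
  delete(): Promise<void>
}

async function cleanup(sandbox: Deletable): Promise<void> {
  console.log('Cleaning up...')
  await sandbox.delete() // removes the sandbox and every file inside it
}

// Typically wired to Ctrl+C in the main program:
// process.on('SIGINT', () => cleanup(sandbox).then(() => process.exit(0)))
```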
**Key advantages:**
- Secure, isolated execution in Daytona sandboxes
- Communicate with the agent directly in your terminal
- Automatic dev server detection and live preview links
- Multi-language and full-stack support
- Simple setup and automatic cleanup
This guide walks you through running Claude Code inside a Daytona sandbox using the Daytona CLI.
### Prerequisites
- Daytona account and API key (Get it from [Daytona Dashboard](https://app.daytona.io/dashboard/keys))
- Local terminal (macOS, Linux, or Windows)
### Install the Daytona CLI
```bash
brew install daytonaio/cli/daytona
```
```bash
powershell -Command "irm https://get.daytona.io/windows | iex"
```
:::note
Already have the CLI? Check your version with `daytona --version`. If it's below **0.135.0**, [upgrade to the latest version](https://www.daytona.io/docs/en/getting-started.md#cli).
:::
### Authenticate with Daytona
Log in to your Daytona account using your API key:
```bash
daytona login --api-key=YOUR_API_KEY
```
Replace `YOUR_API_KEY` with your actual Daytona API key.
### Create a Sandbox
Create a new sandbox for running Claude Code:
```bash
daytona sandbox create --name claude-sandbox
```
This creates a sandbox named `claude-sandbox`, visible in your [Dashboard](https://app.daytona.io/dashboard/sandboxes). The default Daytona snapshot includes Claude Code, so the command above is all you need.
:::tip
Need more power? Pass the `--snapshot daytona-large` or `--snapshot daytona-medium` flag to increase your sandbox resources. See [default snapshots](https://www.daytona.io/docs/en/snapshots.md#default-snapshots) for resource details.
:::
### Connect to the Sandbox
SSH into your sandbox:
```bash
daytona ssh claude-sandbox
```
You now have an interactive terminal session inside the sandbox.
### Run Claude Code
Inside the SSH session, start Claude Code:
```bash
claude
```
On first run, Claude Code will prompt you to authenticate:
1. Copy the authentication URL displayed in the terminal
2. Open the URL in your local browser
3. Complete the authentication flow
4. Copy the code provided by the browser
5. Paste the code back into the terminal
Once authenticated, you're all set. Claude Code runs inside the sandbox while you control it from your terminal.
Claude Code allows you to automate and orchestrate tasks using natural language and code. With Daytona, you can easily run Claude Code inside isolated sandboxes, making it simple to experiment and execute tasks securely.
## Running Claude Code in a Daytona Sandbox
You can run Claude Code and execute tasks with it directly inside a Daytona sandbox. The following examples show how to set up a sandbox, install Claude Code, run tasks programmatically, and stream logs in real time.
> **Note:** While both sync and async modes support streaming PTY output, `AsyncDaytona` is recommended as it provides automatic background callbacks via `on_data`. The synchronous API requires blocking iteration or manual threading to handle output.
```python
import os
import asyncio
from daytona import AsyncDaytona

async def run_claude_code():
    async with AsyncDaytona() as daytona:
        sandbox = await daytona.create()

        # Define the Claude Code command to be executed
        claude_command = "claude --dangerously-skip-permissions -p 'write a dad joke about penguins' --output-format stream-json --verbose"

        # Install Claude Code in the sandbox
        await sandbox.process.exec("npm install -g @anthropic-ai/claude-code")

        pty_handle = await sandbox.process.create_pty_session(
            id="claude", on_data=lambda data: print(data.decode(), end="")
        )
        await pty_handle.wait_for_connection()

        # Run the Claude Code command inside the sandbox
        await pty_handle.send_input(
            f"ANTHROPIC_API_KEY={os.environ['ANTHROPIC_API_KEY']} {claude_command}\n"
        )

        # Use this to close the terminal session if no more commands will be executed
        # await pty_handle.send_input("exit\n")
        await pty_handle.wait()

        # If you are done and have closed the PTY terminal, it is recommended to clean up resources by deleting the sandbox
        # await sandbox.delete()

if __name__ == "__main__":
    asyncio.run(run_claude_code())
```
```typescript
import { Daytona } from "@daytona/sdk";

const daytona = new Daytona();

try {
  const sandbox = await daytona.create();

  // Define the Claude Code command to be executed
  const claudeCommand =
    "claude --dangerously-skip-permissions -p 'write a dad joke about penguins' --output-format stream-json --verbose";

  // Install Claude Code in the sandbox
  await sandbox.process.executeCommand("npm install -g @anthropic-ai/claude-code");

  const ptyHandle = await sandbox.process.createPty({
    id: "claude",
    onData: (data) => {
      process.stdout.write(data);
    },
  });
  await ptyHandle.waitForConnection();

  // Run the Claude Code command inside the sandbox
  ptyHandle.sendInput(
    `ANTHROPIC_API_KEY=${process.env.ANTHROPIC_API_KEY} ${claudeCommand}\n`
  );

  // Use this to close the terminal session if no more commands will be executed
  // ptyHandle.sendInput("exit\n")
  await ptyHandle.wait();

  // If you are done and have closed the PTY terminal, it is recommended to clean up resources by deleting the sandbox
  // await sandbox.delete();
} catch (error) {
  console.error("Failed to run Claude Code in Daytona sandbox:", error);
}
```
Guides for integrating Claude with Daytona.
import { Image } from 'astro:assets'
import codexSdkLunarLanderResult from '../../../../../assets/docs/images/codex-sdk-lunar-lander-result.gif'
This guide demonstrates how to run an autonomous coding agent based on [OpenAI Codex](https://chatgpt.com/features/codex) inside a Daytona sandbox environment. The agent can develop full-stack web apps, write code in any language, install dependencies, and run scripts. It can also start and manage dev servers, and generate preview links for live apps.
---
### 1. Workflow Overview
When you launch the main module, a Daytona sandbox is created and a Node.js agent is initialized inside it. The agent is based on the [Codex SDK](https://developers.openai.com/codex/sdk/).
You interact with the main program via a command line chat interface. The program sends your prompts to the agent inside the sandbox, which executes them and returns the results:
```
$ npm run start
Creating sandbox...
Installing Codex agent in sandbox...
Press Ctrl+C at any time to exit.
User: create a 3d animated web-based, lunar lander game
Thinking...
🔨 ✓ Run: /bin/sh -lc ls
🔨 ✓ Run: /bin/sh -lc 'ls -a'
🔨 ✓ Run: /bin/sh -lc 'ls .daytona'
🔨 ✓ Run: /bin/sh -lc 'find /home/daytona -maxdepth 4 -name .git'
📝 Add /home/daytona/index.html
📝 Add /home/daytona/style.css
📝 Add /home/daytona/main.js
📝 Update /home/daytona/main.js
- Built a self-contained 3D lunar lander experience with HUD in index.html wired to main.js.
- Styled a glassy mission card, typography, and neon accents in style.css.
- Implemented the Three.js scene in main.js: starfield + noisy terrain with a flattened pad, modeled lander, thrust/fuel/rotation controls, gravity/drag physics, landing/crash checks, exhaust particles, and a chase camera. Controls: Space/↑ thrust, ←/→ yaw, W/S pitch, R restart.
Next steps:
1) Serve locally (e.g., cd /home/daytona && python3 -m http.server 8080) and open https://8080-e7c5deb5-7723-4bb8-93c6-25258d9b7c53.proxy.daytona.works.
2) Tune physics constants or terrain size if you want a harder/easier landing.
🗒️ To-do list:
- [x] Inspect workspace and set up project structure for web-based lunar lander game
- [x] Implement 3D scene, lunar lander controls, physics, and game loop
- [x] Add UI elements, polish, and quick sanity check (open file if feasible)
Usage Summary: Cached: 71936, Input: 103238, Output: 11311
User: start the server
Thinking...
🔨 ✓ Run: /bin/sh -lc 'cd /home/daytona && nohup python3 -m http.server 8080 --bind 0.0.0.0 >/home/daytona/server.log 2>&1 & echo $!'
Server started on port 8080 (pid 274). Open the game at:
https://8080-e7c5deb5-7723-4bb8-93c6-25258d9b7c53.proxy.daytona.works
If you need to stop it later: kill 274.
Usage Summary: Cached: 4096, Input: 22231, Output: 272
User:
Cleaning up...
```
The agent can also host web apps and provide you with a preview link using the [Daytona Preview Links](https://www.daytona.io/docs/en/preview-and-authentication.md) feature. When your task involves running or previewing a web application, the agent automatically reasons about this need, hosts the app, and generates a preview link for you to inspect the live result:
You can continue interacting with your agent until you are finished. When you exit the program, the sandbox will be deleted automatically.
### 2. Project Setup
#### Clone the Repository
First, clone the Daytona [repository](https://github.com/daytonaio/daytona.git) and navigate to the example directory:
```bash
git clone https://github.com/daytonaio/daytona.git
cd daytona/guides/typescript/openai/codex-sdk
```
#### Configure Environment
Get your API keys:
- **Daytona API key:** [Daytona Dashboard](https://app.daytona.io/dashboard/keys)
- **OpenAI API key:** [OpenAI Developer Platform](https://platform.openai.com/api-keys)
Copy `.env.example` to `.env` and add your keys:
```bash
DAYTONA_API_KEY=your_daytona_key
SANDBOX_OPENAI_API_KEY=your_openai_key
```
:::caution[API Key Security]
The `SANDBOX_OPENAI_API_KEY` key is passed into the Daytona sandbox environment and is accessible to any code executed inside the sandbox.
:::
#### Local Usage
:::note[Node.js Version]
Node.js 18 or newer is required to run this example. Please ensure your environment meets this requirement before proceeding.
:::
Install dependencies:
```bash
npm install
```
Run the agent:
```bash
npm run start
```
The agent will start and wait for your prompt.
### 3. Understanding the Agent's Architecture
This example consists of two main components:
- **Main Program:** The main program is a Node.js script (`src/index.ts`) that runs on your local machine. It uses the Daytona SDK to create and manage a Daytona sandbox. The main program provides a command line interface for interacting with the agent inside the sandbox.
- **Sandbox Agent:** The sandbox agent is a Node.js script (`agent/index.ts`) that runs inside the Daytona sandbox. It uses the Codex SDK to create a customized coding agent.
#### Initialization
On initialization, the main program:
1. Creates a new [Daytona sandbox](https://www.daytona.io/docs/en/sandboxes.md) with your OpenAI API key included in the environment variables.
2. Configures the Codex system prompt with Daytona-specific instructions and writes it to a `.codex/config.toml` file in the sandbox.
3. Uploads the agent package to the sandbox with [file uploading](https://www.daytona.io/docs/en/file-system-operations.md#uploading-a-single-file).
4. Installs the agent dependencies by running `npm install` in the uploaded agent directory.
5. Waits for user input and runs the agent asynchronously for each prompt.
#### Main Program Code
Custom system prompts for Codex must be configured via a `.codex/config.toml` file, so the main program creates this file in the sandbox before starting the agent:
```typescript
const systemPrompt = [
  'You are running in a Daytona sandbox.',
  'Use the /home/daytona directory instead of /workspace for file operations.',
  `When running services on localhost, they will be accessible as: ${previewUrlPattern}`,
].join(' ')

const config = `developer_instructions = "${systemPrompt}"`

await sandbox.fs.createFolder('.codex', '755')
await sandbox.fs.uploadFile(Buffer.from(config, 'utf8'), '.codex/config.toml')
```
This prompt instructs the agent to use the correct file paths and preview link format for Daytona sandboxes.
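For illustration, a `previewUrlPattern` like the one in the chat transcript above could be derived from the sandbox ID (this helper and its `{{PORT}}` placeholder are assumptions for illustration; the example itself may obtain preview URLs through the SDK):

```typescript
// Hypothetical helper: derive the proxy preview URL pattern for a sandbox.
// Matches the URL shape seen in the transcript, e.g.
// https://8080-<sandbox-id>.proxy.daytona.works
function buildPreviewUrlPattern(sandboxId: string): string {
  return `https://{{PORT}}-${sandboxId}.proxy.daytona.works`
}
```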
After installing dependencies, the main program enters a loop to read user input and send it to the agent. For each user prompt it receives, it creates a new Daytona process session to run the agent command asynchronously and stream back the output:
```typescript
// Create a session to stream the agent output
const sessionId = `codex-session-${Date.now()}`
await sandbox.process.createSession(sessionId)

// Run the agent asynchronously, passing the prompt and OpenAI API key
const command = await sandbox.process.executeSessionCommand(sessionId, {
  command: `${environmentPrefix({ PROMPT: prompt })} npm exec --prefix /tmp/agent tsx -- /tmp/agent/index.ts`,
  runAsync: true,
})

// Stream agent output as it arrives
if (!command.cmdId) throw new Error('Failed to start agent command in sandbox')
await sandbox.process.getSessionCommandLogs(
  sessionId,
  command.cmdId,
  onStdout,
  onStderr,
)

// Delete the session
await sandbox.process.deleteSession(sessionId)
```
The `onStdout` and `onStderr` callbacks are used to pass the agent's output back to the main program. After the agent finishes responding to the prompt, the main program waits for the next user input.
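The `environmentPrefix` helper used above is not shown in this guide; a minimal sketch (an assumption, not the example's exact code) would serialize the variables into a shell-safe `KEY='value'` prefix:

```typescript
// Hypothetical sketch of the environmentPrefix helper: turn a map of
// environment variables into a `KEY='value'` prefix for a shell command.
// Single quotes in values are escaped so the prompt cannot break out of them.
function environmentPrefix(envs: Record<string, string>): string {
  return Object.entries(envs)
    .map(([key, value]) => `${key}='${value.replace(/'/g, `'\\''`)}'`)
    .join(' ')
}

// environmentPrefix({ PROMPT: "it's done" })
// → PROMPT='it'\''s done'
```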
#### Sandbox Agent Code
The sandbox agent uses the [Codex SDK](https://developers.openai.com/codex/sdk/) to create a customized coding agent.
The agent is initialized with custom options that include the workspace directory:
```typescript
// Configure Codex options
const options: ThreadOptions = {
  workingDirectory: '/home/daytona',
  skipGitRepoCheck: true,
  sandboxMode: 'danger-full-access',
}
```
The agent persists thread state between requests by writing the thread ID to a file, allowing it to maintain context across multiple interactions:
```typescript
const threadIdPath = '/tmp/codex-thread-id'
const threadId = (await readFileIfExisting(threadIdPath))?.trim()
const thread: Thread = threadId
  ? codex.resumeThread(threadId, options)
  : codex.startThread(options)
```
Additional code to stream agent responses follows the examples in OpenAI's [Codex SDK documentation](https://github.com/openai/codex/blob/main/sdk/typescript/README.md).
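The `readFileIfExisting` helper is not shown above; a minimal sketch (an assumption, not the example's exact code) returns the file contents, or `undefined` when no thread has been persisted yet:

```typescript
import * as fs from 'fs/promises'

// Hypothetical sketch of the readFileIfExisting helper: return the file's
// contents, or undefined when the file has not been created yet.
async function readFileIfExisting(path: string): Promise<string | undefined> {
  try {
    return await fs.readFile(path, 'utf8')
  } catch {
    return undefined
  }
}

// After the first turn, the thread ID would be persisted for the next run:
// await fs.writeFile(threadIdPath, thread.id)
```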
#### Clean up
When you exit the main program, the Daytona sandbox and all files are automatically deleted.
**Key advantages:**
- Secure, isolated execution in Daytona sandboxes
- Communicate with the agent directly in your terminal
- Automatic dev server detection and live preview links
- Multi-language and full-stack support
- Thread persistence across multiple requests
- Simple setup and automatic cleanup
import { Image } from 'astro:assets'
import chartImage from '../../../../assets/docs/images/chart-0.png'
You can use a Daytona Sandbox to run AI-generated code to analyze data. Here's how the AI data analysis workflow typically looks:
1. Your user has a dataset in CSV format or other formats.
2. You prompt the LLM to generate code (usually Python) based on the user's data.
3. The sandbox runs the AI-generated code and returns the results.
4. The LLM receives feedback from the execution and can iterate multiple times to refine the code if needed.
5. You display the final results to the user.
---
## Build an AI Data Analyst with Daytona
This example shows how to build an AI-powered data analyst that automatically generates insights and visualizations from CSV data using Daytona's secure sandbox environment.
**What we'll build:** A system that analyzes a vehicle valuation dataset, identifies how price relates to manufacturing year, and generates professional visualizations - all through natural language prompts to Claude. The system uses an agentic loop that lets Claude iteratively refine the code based on execution results.
### 1. Project Setup
#### 1.1 Install Dependencies
Install the Daytona SDK and Anthropic SDK to your project:
```bash
pip install daytona anthropic python-dotenv
```
```bash
npm install @daytona/sdk @anthropic-ai/sdk dotenv
```
```bash
gem install daytona anthropic dotenv
```
#### 1.2 Configure Environment
Get your API keys and configure your environment:
1. **Daytona API key:** Get it from [Daytona Dashboard](https://app.daytona.io/dashboard/keys)
2. **Anthropic API key:** Get it from [Anthropic Console](https://console.anthropic.com/)
Create a `.env` file in your project:
```bash
DAYTONA_API_KEY=dtn_***
ANTHROPIC_API_KEY=sk-ant-***
```
### 2. Dataset Preparation
#### 2.1 Download Dataset
We'll be using a publicly available dataset of vehicle valuation. You can download it directly from:
[https://download.daytona.io/dataset.csv](https://download.daytona.io/dataset.csv)
Download the file and save it as `dataset.csv` in your project directory.
#### 2.2 Initialize Sandbox
Now create a [Daytona sandbox](https://www.daytona.io/docs/en/sandboxes.md#basic-sandbox-creation) and upload your dataset:
```python
from dotenv import load_dotenv
from daytona import Daytona
import os
load_dotenv()
# Create sandbox
daytona = Daytona() # The sandbox language is Python by default.
sandbox = daytona.create()
# Upload the dataset to the sandbox
sandbox.fs.upload_file("dataset.csv", "/home/daytona/dataset.csv")
```
```typescript
import 'dotenv/config'
import { Daytona } from '@daytona/sdk';
// Create sandbox
const daytona = new Daytona(); // The sandbox language is Python by default.
const sandbox = await daytona.create()
// Upload the dataset to the sandbox
await sandbox.fs.uploadFile('dataset.csv', '/home/daytona/dataset.csv')
```
```ruby
require 'daytona'
require 'dotenv/load'
# Create sandbox
daytona = Daytona::Daytona.new # The sandbox language is Python by default.
sandbox = daytona.create
# Upload the dataset to the sandbox
sandbox.fs.upload_file(File.read('dataset.csv'), '/home/daytona/dataset.csv')
```
### 3. Building the AI Data Analyst
Now we'll create the core functionality that connects Claude with Daytona to analyze data and generate visualizations.
#### 3.1 Code Execution Handler
First, let's create a function to handle code execution and chart extraction. This function returns execution results that can be fed back to the AI model:
```python
import base64
from typing import TypedDict

class ExecutionResult(TypedDict):
    stdout: str
    exit_code: int
    charts: list

def run_ai_generated_code(sandbox, ai_generated_code: str) -> ExecutionResult:
    execution = sandbox.process.code_run(ai_generated_code)
    result = ExecutionResult(
        stdout=execution.result or "",
        exit_code=execution.exit_code,
        charts=execution.artifacts.charts if execution.artifacts else []
    )

    # Save any charts that were generated
    if execution.artifacts and execution.artifacts.charts:
        result_idx = 0
        for chart in execution.artifacts.charts:
            if chart.png:
                filename = f'chart-{result_idx}.png'
                with open(filename, 'wb') as f:
                    f.write(base64.b64decode(chart.png))
                print(f'✓ Chart saved to {filename}')
                result_idx += 1

    return result
```
```typescript
import fs from 'fs'
import { Sandbox } from '@daytona/sdk'

interface ExecutionResult {
  stdout: string
  exitCode: number
  charts?: Array<{ png?: string }>
}

async function runAIGeneratedCode(
  sandbox: Sandbox,
  aiGeneratedCode: string
): Promise<ExecutionResult> {
  const execution = await sandbox.process.codeRun(aiGeneratedCode)
  const result: ExecutionResult = {
    stdout: execution.result || "",
    exitCode: execution.exitCode,
    charts: execution.artifacts?.charts
  }

  // Save any charts that were generated
  if (execution.artifacts?.charts) {
    let resultIdx = 0
    for (const chart of execution.artifacts.charts) {
      if (chart.png) {
        const filename = `chart-${resultIdx}.png`
        fs.writeFileSync(filename, chart.png, { encoding: 'base64' })
        console.log(`✓ Chart saved to ${filename}`)
        resultIdx++
      }
    }
  }

  return result
}
```
#### 3.2 Creating the Analysis Prompt
Next, we'll create the prompt that tells Claude about our dataset and what analysis we want. This prompt includes:
- Dataset schema and column descriptions
- The specific analysis request (vehicle price variation by manufacturing year)
- Instructions for code generation
```python
from anthropic import Anthropic
prompt = f"""
I have a CSV file with vehicle valuations saved in the sandbox at /home/daytona/dataset.csv.
Relevant columns:
- 'year': integer, the manufacturing year of the vehicle
- 'price_in_euro': float, the listed price of the vehicle in Euros
Analyze how price varies by manufacturing year.
Drop rows where 'year' or 'price_in_euro' is missing, non-numeric, or an outlier.
Create a line chart showing average price per year.
Write Python code that analyzes the dataset based on my request and produces a matplotlib chart accordingly.
Always finish with plt.show() to display the chart."""
anthropic = Anthropic()
```
```typescript
import Anthropic from '@anthropic-ai/sdk'
const prompt = `
I have a CSV file with vehicle valuations saved in the sandbox at /home/daytona/dataset.csv.
Relevant columns:
- 'year': integer, the manufacturing year of the vehicle
- 'price_in_euro': float, the listed price of the vehicle in Euros
Analyze how price varies by manufacturing year.
Drop rows where 'year' or 'price_in_euro' is missing, non-numeric, or an outlier.
Create a line chart showing average price per year.
Write Python code that analyzes the dataset based on my request and produces a matplotlib chart accordingly.
Always finish with plt.show() to display the chart.`
const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY })
```
#### 3.3 Tool Definition
Define the tool that allows Claude to execute Python code in the sandbox:
```python
tools = [
    {
        'name': 'run_python_code',
        'description': 'Run Python code in the sandbox environment and get execution results',
        'input_schema': {
            'type': 'object',
            'properties': {
                'code': {
                    'type': 'string',
                    'description': 'The Python code to run',
                },
            },
            'required': ['code'],
        },
    },
]
```
```typescript
import type { Tool, ToolUseBlock } from '@anthropic-ai/sdk/resources/messages.mjs'

const tools: Tool[] = [
  {
    name: 'run_python_code',
    description: 'Run Python code in the sandbox environment and get execution results',
    input_schema: {
      type: 'object',
      properties: {
        code: {
          type: 'string',
          description: 'The Python code to run',
        },
      },
      required: ['code'],
    },
  },
]
```
#### 3.4 Agentic Loop Implementation
Now we'll implement the agentic loop that allows Claude to iteratively refine the code based on execution feedback. This enables Claude to fix errors, handle edge cases, and improve the analysis through multiple iterations:
```python
# Initialize conversation history
messages = [{'role': 'user', 'content': prompt}]
continue_loop = True
iteration_count = 0
max_iterations = 10

print("Starting agentic loop...\n")

while continue_loop and iteration_count < max_iterations:
    iteration_count += 1
    print(f"\n=== Iteration {iteration_count} ===")
    print("Waiting for the model response...")

    # Get response from Claude
    msg = anthropic.messages.create(
        model='claude-sonnet-4-5',
        max_tokens=64000,
        messages=messages,
        tools=tools
    )

    # Log Claude's text response
    for content_block in msg.content:
        if content_block.type == 'text':
            print("\nClaude's response:")
            print(content_block.text)

    # Check if Claude wants to use any tools
    tool_uses = [block for block in msg.content if block.type == 'tool_use']
    if len(tool_uses) == 0:
        # No more tool uses, Claude is done
        print("\nTask completed - no more actions needed.")
        continue_loop = False
        break

    # Add Claude's response to message history
    messages.append({'role': 'assistant', 'content': msg.content})

    # Execute all tool calls and collect results
    tool_results = []
    for tool_use in tool_uses:
        if tool_use.name == 'run_python_code':
            code = tool_use.input['code']
            print("\n--- Executing Python code in sandbox ---")
            print(code)
            print("--- End of code ---\n")

            # Execute the code in the sandbox
            execution_result = run_ai_generated_code(sandbox, code)

            # Format the tool result
            result_content = ""
            if execution_result['exit_code'] == 0:
                result_content += "Execution successful!\n\n"
                if execution_result['stdout']:
                    result_content += f"Output:\n{execution_result['stdout']}\n"
                if execution_result['charts'] and len(execution_result['charts']) > 0:
                    result_content += f"\nGenerated {len(execution_result['charts'])} chart(s)."
                else:
                    result_content += "\nNote: No charts were generated. Make sure to use plt.show() to display the chart."
            else:
                result_content += f"Execution failed with exit code {execution_result['exit_code']}\n\n"
                if execution_result['stdout']:
                    result_content += f"Output:\n{execution_result['stdout']}\n"

            tool_results.append({
                'type': 'tool_result',
                'tool_use_id': tool_use.id,
                'content': result_content
            })
            print("Execution result sent back to Claude.")

    # Add tool results to conversation history
    messages.append({'role': 'user', 'content': tool_results})

if iteration_count >= max_iterations:
    print("\n⚠️ Reached maximum iteration limit. Task may not be complete.")

print("\n=== Agentic loop completed ===")
```
```typescript
import type { MessageParam } from '@anthropic-ai/sdk/resources/messages.mjs'

interface CodeRunToolInput {
  code: string
}

// Initialize conversation history
const messages: MessageParam[] = [
  { role: 'user', content: prompt }
]
let continueLoop = true
let iterationCount = 0
const maxIterations = 10

console.log("Starting agentic loop...\n")

while (continueLoop && iterationCount < maxIterations) {
  iterationCount++
  console.log(`\n=== Iteration ${iterationCount} ===`)
  console.log("Waiting for the model response...")

  // Get response from Claude
  const stream = anthropic.messages.stream({
    model: 'claude-sonnet-4-5',
    max_tokens: 64000,
    messages: messages,
    tools: tools
  })
  const message = await stream.finalMessage()

  // Log Claude's text response
  for (const contentBlock of message.content) {
    if (contentBlock.type === 'text') {
      console.log("\nClaude's response:")
      console.log(contentBlock.text)
    }
  }

  // Check if Claude wants to use any tools
  const toolUses = message.content.filter(
    (block): block is ToolUseBlock => block.type === 'tool_use'
  )
  if (toolUses.length === 0) {
    // No more tool uses, Claude is done
    console.log("\nTask completed - no more actions needed.")
    continueLoop = false
    break
  }

  // Add Claude's response to message history
  messages.push({
    role: 'assistant',
    content: message.content
  })

  // Execute all tool calls and collect results
  const toolResults = []
  for (const toolUse of toolUses) {
    if (toolUse.name === 'run_python_code') {
      const code = (toolUse.input as CodeRunToolInput).code
      console.log("\n--- Executing Python code in sandbox ---")
      console.log(code)
      console.log("--- End of code ---\n")

      // Execute the code in the sandbox
      const executionResult = await runAIGeneratedCode(sandbox, code)

      // Format the tool result
      let resultContent = ""
      if (executionResult.exitCode === 0) {
        resultContent += "Execution successful!\n\n"
        if (executionResult.stdout) {
          resultContent += `Output:\n${executionResult.stdout}\n`
        }
        if (executionResult.charts && executionResult.charts.length > 0) {
          resultContent += `\nGenerated ${executionResult.charts.length} chart(s).`
        } else {
          resultContent += "\nNote: No charts were generated. Make sure to use plt.show() to display the chart."
        }
      } else {
        resultContent += `Execution failed with exit code ${executionResult.exitCode}\n\n`
        if (executionResult.stdout) {
          resultContent += `Output:\n${executionResult.stdout}\n`
        }
      }

      toolResults.push({
        type: 'tool_result' as const,
        tool_use_id: toolUse.id,
        content: resultContent
      })
      console.log("Execution result sent back to Claude.")
    }
  }

  // Add tool results to conversation history
  messages.push({
    role: 'user',
    content: toolResults
  })
}

if (iterationCount >= maxIterations) {
  console.log("\n⚠️ Reached maximum iteration limit. Task may not be complete.")
}

console.log("\n=== Agentic loop completed ===")
```
The agentic loop works as follows:
1. **Initial Request**: Send the initial prompt to Claude with the tool definition
2. **Iteration Loop**: For each iteration (up to 10 times):
- Claude generates a response with optional tool calls
- If there are tool calls, execute the Python code in the sandbox
- Send execution results back to Claude (including errors or success messages)
- Claude can then refine the code based on the feedback
3. **Completion**: Loop ends when Claude signals no more tool calls are needed or max iterations reached
This approach allows Claude to:
- Fix errors if the initial code fails
- Iterate on the analysis if results aren't satisfactory
- Handle edge cases discovered during execution
- Improve visualizations based on the actual data
**Key advantages of this approach:**
- **Secure execution:** Code runs in isolated Daytona sandboxes
- **Automatic artifact capture:** Charts, tables, and outputs are automatically extracted
- **Error handling:** Built-in error detection and logging
- **Language agnostic:** While we used Python here, Daytona supports multiple languages
### 4. Running Your Analysis
Now you can run the complete code to see the results.
```bash
python data-analysis.py
```
```bash
npx tsx data-analysis.ts
```
You should see the chart in your project directory that will look similar to this:
### 5. Complete Implementation
Here are the complete, ready-to-run examples with the agentic loop:
```python
import base64
from dotenv import load_dotenv
from daytona import Daytona, Sandbox
from anthropic import Anthropic
from typing import TypedDict


class ExecutionResult(TypedDict):
    stdout: str
    exit_code: int
    charts: list


def main():
    load_dotenv()

    # Create sandbox
    daytona = Daytona()
    sandbox = daytona.create()

    # Upload the dataset to the sandbox
    sandbox.fs.upload_file("dataset.csv", "/home/daytona/dataset.csv")

    initial_prompt = """
I have a CSV file with vehicle valuations saved in the sandbox at /home/daytona/dataset.csv.
Relevant columns:
- 'year': integer, the manufacturing year of the vehicle
- 'price_in_euro': float, the listed price of the vehicle in Euros
Analyze how price varies by manufacturing year.
Drop rows where 'year' or 'price_in_euro' is missing, non-numeric, or an outlier.
Create a line chart showing average price per year.
Write Python code that analyzes the dataset based on my request and produces a matplotlib chart accordingly.
Always finish with plt.show() to display the chart."""

    anthropic = Anthropic()

    tools = [
        {
            'name': 'run_python_code',
            'description': 'Run Python code in the sandbox environment and get execution results',
            'input_schema': {
                'type': 'object',
                'properties': {
                    'code': {
                        'type': 'string',
                        'description': 'The Python code to run',
                    },
                },
                'required': ['code'],
            },
        },
    ]

    # Initialize conversation history
    messages = [{'role': 'user', 'content': initial_prompt}]

    continue_loop = True
    iteration_count = 0
    max_iterations = 10

    print("Starting agentic loop...\n")

    while continue_loop and iteration_count < max_iterations:
        iteration_count += 1
        print(f"\n=== Iteration {iteration_count} ===")
        print("Waiting for the model response...")

        # Get response from Claude
        msg = anthropic.messages.create(
            model='claude-sonnet-4-5',
            max_tokens=64000,
            messages=messages,
            tools=tools
        )

        # Log Claude's text response
        for content_block in msg.content:
            if content_block.type == 'text':
                print("\nClaude's response:")
                print(content_block.text)

        # Check if Claude wants to use any tools
        tool_uses = [block for block in msg.content if block.type == 'tool_use']
        if len(tool_uses) == 0:
            # No more tool uses, Claude is done
            print("\nTask completed - no more actions needed.")
            continue_loop = False
            break

        # Add Claude's response to message history
        messages.append({'role': 'assistant', 'content': msg.content})

        # Execute all tool calls and collect results
        tool_results = []
        for tool_use in tool_uses:
            if tool_use.name == 'run_python_code':
                code = tool_use.input['code']
                print("\n--- Executing Python code in sandbox ---")
                print(code)
                print("--- End of code ---\n")

                # Execute the code in the sandbox
                execution_result = run_ai_generated_code(sandbox, code)

                # Format the tool result
                result_content = ""
                if execution_result['exit_code'] == 0:
                    result_content += "Execution successful!\n\n"
                    if execution_result['stdout']:
                        result_content += f"Output:\n{execution_result['stdout']}\n"
                    if execution_result['charts'] and len(execution_result['charts']) > 0:
                        result_content += f"\nGenerated {len(execution_result['charts'])} chart(s)."
                    else:
                        result_content += "\nNote: No charts were generated. Make sure to use plt.show() to display the chart."
                else:
                    result_content += f"Execution failed with exit code {execution_result['exit_code']}\n\n"
                    if execution_result['stdout']:
                        result_content += f"Output:\n{execution_result['stdout']}\n"

                tool_results.append({
                    'type': 'tool_result',
                    'tool_use_id': tool_use.id,
                    'content': result_content
                })
                print("Execution result sent back to Claude.")

        # Add tool results to conversation history
        messages.append({'role': 'user', 'content': tool_results})

    if iteration_count >= max_iterations:
        print("\n⚠️ Reached maximum iteration limit. Task may not be complete.")

    print("\n=== Agentic loop completed ===")


def run_ai_generated_code(sandbox: Sandbox, ai_generated_code: str) -> ExecutionResult:
    execution = sandbox.process.code_run(ai_generated_code)
    result = ExecutionResult(
        stdout=execution.result or "",
        exit_code=execution.exit_code,
        charts=execution.artifacts.charts if execution.artifacts else []
    )

    # Save any charts that were generated
    if execution.artifacts and execution.artifacts.charts:
        result_idx = 0
        for chart in execution.artifacts.charts:
            if chart.png:
                filename = f'chart-{result_idx}.png'
                with open(filename, 'wb') as f:
                    f.write(base64.b64decode(chart.png))
                print(f'✓ Chart saved to {filename}')
                result_idx += 1

    return result


if __name__ == "__main__":
    main()
```
```typescript
import "dotenv/config";
import fs from "fs";
import Anthropic from "@anthropic-ai/sdk";
import { Daytona, Sandbox } from "@daytonaio/sdk";
import type {
  MessageParam,
  Tool,
  ToolUseBlock,
} from "@anthropic-ai/sdk/resources/messages.mjs";

interface CodeRunToolInput {
  code: string;
}

interface ExecutionResult {
  stdout: string;
  exitCode: number;
  charts?: Array<{ png?: string }>;
}

async function main() {
  // Create sandbox
  const daytona = new Daytona();
  const sandbox = await daytona.create();

  // Upload the dataset to the sandbox
  await sandbox.fs.uploadFile("dataset.csv", "/home/daytona/dataset.csv");

  const initialPrompt = `
I have a CSV file with vehicle valuations saved in the sandbox at /home/daytona/dataset.csv.
Relevant columns:
- 'year': integer, the manufacturing year of the vehicle
- 'price_in_euro': float, the listed price of the vehicle in Euros
Analyze how price varies by manufacturing year.
Drop rows where 'year' or 'price_in_euro' is missing, non-numeric, or an outlier.
Create a line chart showing average price per year.
Write Python code that analyzes the dataset based on my request and produces a matplotlib chart accordingly.
Always finish with plt.show() to display the chart.`;

  const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

  const tools: Tool[] = [
    {
      name: "run_python_code",
      description: "Run Python code in the sandbox environment and get execution results",
      input_schema: {
        type: "object",
        properties: {
          code: {
            type: "string",
            description: "The Python code to run",
          },
        },
        required: ["code"],
      },
    },
  ];

  // Initialize conversation history
  const messages: MessageParam[] = [
    { role: "user", content: initialPrompt },
  ];

  let continueLoop = true;
  let iterationCount = 0;
  const maxIterations = 10;

  console.log("Starting agentic loop...\n");

  while (continueLoop && iterationCount < maxIterations) {
    iterationCount++;
    console.log(`\n=== Iteration ${iterationCount} ===`);
    console.log("Waiting for the model response...");

    // Get response from Claude
    const stream = anthropic.messages.stream({
      model: "claude-sonnet-4-5",
      max_tokens: 64000,
      messages: messages,
      tools: tools,
    });
    const message = await stream.finalMessage();

    // Log Claude's text response
    for (const contentBlock of message.content) {
      if (contentBlock.type === "text") {
        console.log("\nClaude's response:");
        console.log(contentBlock.text);
      }
    }

    // Check if Claude wants to use any tools
    const toolUses = message.content.filter(
      (block): block is ToolUseBlock => block.type === "tool_use"
    );
    if (toolUses.length === 0) {
      // No more tool uses, Claude is done
      console.log("\nTask completed - no more actions needed.");
      continueLoop = false;
      break;
    }

    // Add Claude's response to message history
    messages.push({
      role: "assistant",
      content: message.content,
    });

    // Execute all tool calls and collect results
    const toolResults = [];
    for (const toolUse of toolUses) {
      if (toolUse.name === "run_python_code") {
        const code = (toolUse.input as CodeRunToolInput).code;
        console.log("\n--- Executing Python code in sandbox ---");
        console.log(code);
        console.log("--- End of code ---\n");

        // Execute the code in the sandbox
        const executionResult = await runAIGeneratedCode(sandbox, code);

        // Format the tool result
        let resultContent = "";
        if (executionResult.exitCode === 0) {
          resultContent += `Execution successful!\n\n`;
          if (executionResult.stdout) {
            resultContent += `Output:\n${executionResult.stdout}\n`;
          }
          if (executionResult.charts && executionResult.charts.length > 0) {
            resultContent += `\nGenerated ${executionResult.charts.length} chart(s).`;
          } else {
            resultContent += `\nNote: No charts were generated. Make sure to use plt.show() to display the chart.`;
          }
        } else {
          resultContent += `Execution failed with exit code ${executionResult.exitCode}\n\n`;
          if (executionResult.stdout) {
            resultContent += `Output:\n${executionResult.stdout}\n`;
          }
        }

        toolResults.push({
          type: "tool_result" as const,
          tool_use_id: toolUse.id,
          content: resultContent,
        });
        console.log("Execution result sent back to Claude.");
      }
    }

    // Add tool results to conversation history
    messages.push({
      role: "user",
      content: toolResults,
    });
  }

  if (iterationCount >= maxIterations) {
    console.log(
      "\n⚠️ Reached maximum iteration limit. Task may not be complete."
    );
  }

  console.log("\n=== Agentic loop completed ===");
}

async function runAIGeneratedCode(
  sandbox: Sandbox,
  aiGeneratedCode: string
): Promise<ExecutionResult> {
  const execution = await sandbox.process.codeRun(aiGeneratedCode);
  const result: ExecutionResult = {
    stdout: execution.result || "",
    exitCode: execution.exitCode,
    charts: execution.artifacts?.charts,
  };

  // Save any charts that were generated
  if (execution.artifacts?.charts) {
    let resultIdx = 0;
    for (const chart of execution.artifacts.charts) {
      if (chart.png) {
        const filename = `chart-${resultIdx}.png`;
        fs.writeFileSync(filename, chart.png, {
          encoding: "base64",
        });
        console.log(`✓ Chart saved to ${filename}`);
        resultIdx++;
      }
    }
  }

  return result;
}

main().catch(console.error);
```
This guide demonstrates how to use the `DaytonaPlugin` for Google ADK to build an agent that generates, tests, and verifies code in a secure sandbox environment. The plugin enables agents to execute Python, JavaScript, and TypeScript code, run shell commands, and manage files within isolated Daytona sandboxes.
In this example, we build a code generator agent that takes a natural language description of a function, generates the implementation in TypeScript, creates test cases, executes them in the sandbox, and iterates until all tests pass before returning the verified code.
---
### 1. Workflow Overview
You describe the function you want in plain English, specifying the language (Python, JavaScript, or TypeScript). The agent generates the implementation, writes tests for it, and executes everything in a Daytona sandbox. If tests fail, the agent automatically fixes the code and re-runs until all tests pass. Only then does it return the verified, working code.
The key benefit: you receive code that has already been tested and verified, not just generated.
### 2. Project Setup
#### Clone the Repository
Clone the Daytona repository and navigate to the example directory:
```bash
git clone https://github.com/daytonaio/daytona
cd daytona/guides/python/google-adk/code-generator-agent/gemini
```
#### Install Dependencies
:::note[Python Version Requirement]
This example requires **Python 3.10 or higher**. It's recommended to use a virtual environment (e.g., `venv` or `poetry`) to isolate project dependencies.
:::
Install the required packages for this example:
```bash
pip install -U google-adk daytona-adk python-dotenv
```
The packages include:
- `google-adk`: Google's Agent Development Kit for building AI agents
- `daytona-adk`: Provides the `DaytonaPlugin` that enables secure code execution in Daytona sandboxes
- `python-dotenv`: Used for loading environment variables from a `.env` file
#### Configure Environment
Get your API keys and configure your environment:
1. **Daytona API key:** Get it from [Daytona Dashboard](https://app.daytona.io/dashboard/keys)
2. **Google API key:** Get it from [Google AI Studio](https://aistudio.google.com/apikey)
Create a `.env` file in your project:
```bash
DAYTONA_API_KEY=dtn_***
GOOGLE_API_KEY=***
```
### 3. Understanding the Core Components
Before diving into the implementation, let's understand the key components we'll use:
#### Google ADK Components
- **Agent**: The AI model wrapper that processes requests and decides which tools to use. It receives instructions, has access to tools, and generates responses.
- **App**: A top-level container that bundles agents with plugins into a single configuration unit. It provides centralized management for shared resources and defines the root agent for your workflow.
- **InMemoryRunner**: The execution engine that runs agents and manages conversation state. It orchestrates the event-driven execution loop, handles message processing, and manages services like session history and artifact storage.
:::note[Running the Agent]
There are two ways to run Google ADK agents: using the `App` class with `InMemoryRunner`, or using `InMemoryRunner` directly with just an agent. The `App` serves as a configuration container that bundles agents with plugins, while the `Runner` handles actual execution and lifecycle management. This guide uses the `App` approach for cleaner organization of agents and plugins.
:::
#### Daytona Plugin
The `DaytonaPlugin` provides tools that allow the agent to:
- Execute code in Python, JavaScript, or TypeScript
- Run shell commands
- Upload and read files
- Start long-running background processes
All operations happen in an isolated sandbox that is automatically cleaned up when done.
### 4. Initialize Environment and Imports
First, we set up our imports and load environment variables:
```python
import asyncio
import logging
from dotenv import load_dotenv
from google.adk.agents import Agent
from google.adk.apps import App
from google.adk.runners import InMemoryRunner
from daytona_adk import DaytonaPlugin
load_dotenv()
logging.basicConfig(level=logging.DEBUG)
```
**What each import does:**
- `asyncio`: Required for running the async ADK runner
- `logging`: Enables debug output to see agent reasoning
- `load_dotenv`: Loads API keys from your `.env` file
- `Agent`, `App`, `InMemoryRunner`: Core Google ADK components
- `DaytonaPlugin`: Provides sandbox execution tools to the agent
**Logging configuration:**
The `logging.basicConfig(level=logging.DEBUG)` line configures Python's logging to show detailed debug output. You can adjust the logging level by passing different values:
- `logging.DEBUG`: Most verbose, shows all internal operations including DaytonaPlugin sandbox creation and tool invocations
- `logging.INFO`: Shows informational messages about agent progress
- `logging.WARNING`: Shows only warnings and errors
- `logging.ERROR`: Shows only errors
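If full `DEBUG` output is too noisy, you can keep the root logger quieter and raise verbosity only for the plugin's loggers. This is a sketch using the standard `logging` module; it assumes the plugin's loggers live under the `daytona_adk` name, which matches the logger names visible in the debug output later in this guide:

```python
import logging

# Keep framework chatter at INFO, but surface the plugin's
# sandbox-lifecycle messages (loggers named under "daytona_adk").
logging.basicConfig(level=logging.INFO)
logging.getLogger("daytona_adk").setLevel(logging.DEBUG)
```

Child loggers such as `daytona_adk.plugin` inherit the `DEBUG` level, so sandbox creation and cleanup messages still appear.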
:::tip[Behind the Scenes]
With `DEBUG` level logging enabled, you can see the DaytonaPlugin's internal operations, including when the sandbox is created, when the `execute_code_in_daytona` tool is invoked, and when cleanup occurs. The plugin's `plugin_name` (configurable, defaults to `daytona_plugin`) appears in these log messages, making it easy to trace plugin activity.
:::
### 5. Define the Response Extractor
The ADK runner returns a list of events from the agent's execution. We need a helper function to extract the final text response:
```python
def extract_final_response(response: list) -> str:
    """Extract the final text response from a list of ADK events."""
    for event in reversed(response):
        text_parts = []
        if hasattr(event, "text") and event.text:
            return event.text
        if hasattr(event, "content") and event.content:
            content = event.content
            if hasattr(content, "parts") and content.parts:
                for part in content.parts:
                    if hasattr(part, "text") and part.text:
                        text_parts.append(part.text)
                if text_parts:
                    return "".join(text_parts)
            if hasattr(content, "text") and content.text:
                return content.text
        if isinstance(event, dict):
            text = event.get("text") or event.get("content", {}).get("text")
            if text:
                return text
    return ""
```
This function iterates through events in reverse order to find the last text response. It handles multiple possible event structures that the ADK may return.
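To see the reverse-scan idea concretely, here is a small self-contained check using hypothetical dict-shaped events (the attribute-based branches behave analogously). The event payloads below are made up for illustration:

```python
# Hypothetical events, mimicking two dict shapes the extractor handles:
# a plain {"text": ...} and a nested {"content": {"text": ...}}.
events = [
    {"text": "calling execute tool..."},
    {"content": {"text": "Here is the verified function."}},
]

# Scanning in reverse returns the most recent textual payload.
for event in reversed(events):
    text = event.get("text") or event.get("content", {}).get("text")
    if text:
        print(text)  # the final response, not the intermediate message
        break
```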
### 6. Define the Agent Instruction
The instruction is critical - it defines how the agent behaves. Our instruction enforces a test-driven workflow:
```python
AGENT_INSTRUCTION = """You are a code generator agent that writes verified, working code.
You support Python, JavaScript, and TypeScript.
Your workflow for every code request:
1. Write the function
2. Write tests for it
3. EXECUTE the code in the sandbox to verify it works - do not skip this step
4. If execution fails, fix and re-execute until tests pass
5. Once verified, respond with ONLY the function (no tests)
You must always execute code before responding. Never return untested code.
Only include tests in your response if the user explicitly asks for them.
"""
```
**Key aspects of this instruction:**
- **Enforces execution**: The agent must run code in the sandbox before responding
- **Iterative fixing**: If tests fail, the agent fixes and retries
- **Controlled output**: By default, the final response contains only the working function. If you want to see the tests, include an instruction to return them in your prompt.
- **Multi-language**: Supports Python, JavaScript, and TypeScript
### 7. Configure the Daytona Plugin
Initialize the plugin that provides sandbox execution capabilities:
```python
plugin = DaytonaPlugin(
    labels={"example": "code-generator"},
)
```
**Configuration options:**
- `labels`: Custom metadata tags for the sandbox (useful for tracking/filtering)
- `api_key`: Daytona API key (defaults to `DAYTONA_API_KEY` env var)
- `sandbox_name`: Custom name for the sandbox
- `plugin_name`: Name displayed in logs when the plugin logs messages (defaults to `daytona_plugin`)
- `env_vars`: Environment variables to set in the sandbox
- `auto_stop_interval`: Minutes before auto-stop (default: 15)
- `auto_delete_interval`: Minutes before auto-delete (disabled by default)
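As an illustration, a more fully configured plugin might look like the sketch below. The parameter names come from the list above, but the values are hypothetical and should be adapted to your project:

```python
plugin = DaytonaPlugin(
    labels={"example": "code-generator"},   # metadata tags for tracking/filtering
    sandbox_name="code-generator-sandbox",  # hypothetical custom sandbox name
    env_vars={"MY_FLAG": "1"},              # injected into the sandbox environment
    auto_stop_interval=30,                  # stop after 30 idle minutes instead of 15
)
```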
### 8. Create the Agent
Create the agent with the Gemini model, our instruction, and the Daytona tools:
```python
agent = Agent(
    model="gemini-2.5-pro",
    name="code_generator_agent",
    instruction=AGENT_INSTRUCTION,
    tools=plugin.get_tools(),
)
```
**Parameters explained:**
- `model`: The Gemini model to use for reasoning and code generation
- `name`: Identifier for the agent
- `instruction`: The behavioral guidelines we defined
- `tools`: List of tools from the Daytona plugin that the agent can use
### 9. Create the App and Runner
Bundle the agent and plugin into an App, then run it:
```python
app = App(
    name="code_generator_app",
    root_agent=agent,
    plugins=[plugin],
)

async with InMemoryRunner(app=app) as runner:
    prompt = "Write a TypeScript function called 'groupBy' that takes an array and a key function, and groups array elements by the key. Use proper type annotations."
    response = await runner.run_debug(prompt)
    final_response = extract_final_response(response)
    print(final_response)
```
**What happens here:**
1. The `App` bundles the agent with the plugin for proper lifecycle management
2. `InMemoryRunner` is used as an async context manager (the `async with` statement). A context manager in Python automatically handles setup and cleanup - when the code enters the `async with` block, the runner initializes; when it exits (either normally or due to an error), the runner cleans up resources.
3. `run_debug` sends the prompt and returns all execution events
4. The sandbox is automatically deleted when the `async with` block exits - this cleanup happens regardless of whether the code completed successfully or raised an exception
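The cleanup guarantee is a general property of async context managers, not something specific to the runner. A minimal stdlib-only illustration (the `Resource` class is invented for the demo):

```python
import asyncio

class Resource:
    """A toy async context manager standing in for the runner."""
    async def __aenter__(self):
        print("setup")
        return self

    async def __aexit__(self, exc_type, exc, tb):
        # Runs whether the block completed normally or raised
        print("cleanup")

async def demo():
    try:
        async with Resource():
            raise RuntimeError("boom")
    except RuntimeError:
        pass

asyncio.run(demo())
# prints "setup" then "cleanup" even though the body raised
```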
### 10. Running the Example
Run the complete example:
```bash
python main.py
```
#### Understanding the Agent's Execution Flow
When you run the code, the agent works through your request step by step. With `logging.DEBUG` enabled, you'll see detailed output including:
- **DaytonaPlugin operations**: Sandbox creation, tool invocations (`execute_code_in_daytona`), and cleanup
- **LLM requests and responses**: The prompts sent to Gemini and the responses received
- **Plugin registration**: Confirmation that the `daytona_plugin` was registered with the agent
Here's what the debug output reveals about each step:
**Step 1: Sandbox Creation**
```
DEBUG:daytona_adk.plugin:Daytona sandbox created: e38f8574-48ac-48f1-a0ff-d922d02b0fcb
INFO:google_adk.google.adk.plugins.plugin_manager:Plugin 'daytona_plugin' registered.
```
The DaytonaPlugin creates an isolated sandbox and registers itself with the agent.
**Step 2: Agent receives the request**
The agent receives your prompt and understands it needs to create a TypeScript `groupBy` function with proper type annotations.
**Step 3: Agent generates code and tests**
The agent writes both the implementation and test cases, then calls the `execute_code_in_daytona` tool:
```
DEBUG:google_adk.google.adk.models.google_llm:
LLM Response:
-----------------------------------------------------------
Function calls:
name: execute_code_in_daytona, args: {'code': "...", 'language': 'typescript'}
```
**Step 4: Code execution in sandbox**
```
DEBUG:daytona_adk.plugin:Before tool: execute_code_in_daytona
DEBUG:daytona_adk.tools:Executing typescript code (length: 1570 chars)
DEBUG:daytona_adk.tools:Code execution completed with exit_code: 0
DEBUG:daytona_adk.plugin:After tool: execute_code_in_daytona
```
The plugin executes the code in the isolated TypeScript environment and returns the result.
**Step 5: Agent iterates if needed**
If tests fail (exit_code != 0), the agent analyzes the error, fixes the code, and re-executes until all tests pass.
**Step 6: Agent returns verified code**
Once tests pass, the agent responds with only the working function. If you included an instruction to return tests in your prompt, the tests will also be included in the response.
**Step 7: Cleanup**
```
INFO:daytona_adk.plugin:Deleting Daytona sandbox...
INFO:daytona_adk.plugin:Daytona sandbox deleted.
INFO:google_adk.google.adk.runners:Runner closed.
```
When the context manager exits, the sandbox is automatically deleted.
#### Example Output
When the agent completes the task, you'll see output like:
````
AGENT RESPONSE:
------------------------------------------------------------
```typescript
function groupBy<T, K extends PropertyKey>(
  array: T[],
  keyFn: (item: T) => K
): Record<K, T[]> {
  return array.reduce((result, item) => {
    const key = keyFn(item);
    if (!result[key]) {
      result[key] = [];
    }
    result[key].push(item);
    return result;
  }, {} as Record<K, T[]>);
}
```
============================================================
App closed, sandbox cleaned up. Done!
````
The agent has already tested this code in the sandbox before returning it, so you can trust that the implementation works correctly.
#### Requesting Tests in the Response
If you want to see the tests that were executed in the sandbox, include an instruction to return them in your prompt:
```python
prompt = "Write a TypeScript function called 'groupBy' that takes an array and a key function, and groups array elements by the key. Use proper type annotations. Return the tests also in a separate code block"
```
With this prompt, the agent will return both the function and the tests:
````
```typescript
function groupBy<T, K extends PropertyKey>(
  array: T[],
  keyFn: (item: T) => K
): Record<K, T[]> {
  return array.reduce((result, item) => {
    const key = keyFn(item);
    if (!result[key]) {
      result[key] = [];
    }
    result[key].push(item);
    return result;
  }, {} as Record<K, T[]>);
}
```
```typescript
import { deepStrictEqual } from 'assert';

// Test case 1: Group by a property of an object
const array1 = [
  { id: 1, category: 'A' },
  { id: 2, category: 'B' },
  { id: 3, category: 'A' },
];
const result1 = groupBy(array1, (item) => item.category);
deepStrictEqual(result1, {
  A: [
    { id: 1, category: 'A' },
    { id: 3, category: 'A' },
  ],
  B: [{ id: 2, category: 'B' }],
});

// Test case 2: Group by length of strings
const array2 = ['apple', 'banana', 'cherry', 'date'];
const result2 = groupBy(array2, (item) => item.length);
deepStrictEqual(result2, {
  5: ['apple'],
  6: ['banana', 'cherry'],
  4: ['date'],
});

console.log('All tests passed!');
```
````
### 11. Complete Implementation
Here is the complete, ready-to-run example with additional output formatting for better readability:
```python
"""Code Generator & Tester Agent Example."""

import asyncio
import logging

from dotenv import load_dotenv
from google.adk.agents import Agent
from google.adk.apps import App
from google.adk.runners import InMemoryRunner

from daytona_adk import DaytonaPlugin

load_dotenv()
logging.basicConfig(level=logging.DEBUG)


def extract_final_response(response: list) -> str:
    """Extract the final text response from a list of ADK events."""
    for event in reversed(response):
        text_parts = []
        if hasattr(event, "text") and event.text:
            return event.text
        if hasattr(event, "content") and event.content:
            content = event.content
            if hasattr(content, "parts") and content.parts:
                for part in content.parts:
                    if hasattr(part, "text") and part.text:
                        text_parts.append(part.text)
                if text_parts:
                    return "".join(text_parts)
            if hasattr(content, "text") and content.text:
                return content.text
        if isinstance(event, dict):
            text = event.get("text") or event.get("content", {}).get("text")
            if text:
                return text
    return ""


AGENT_INSTRUCTION = """You are a code generator agent that writes verified, working code.
You support Python, JavaScript, and TypeScript.
Your workflow for every code request:
1. Write the function
2. Write tests for it
3. EXECUTE the code in the sandbox to verify it works - do not skip this step
4. If execution fails, fix and re-execute until tests pass
5. Once verified, respond with ONLY the function (no tests)
You must always execute code before responding. Never return untested code.
Only include tests in your response if the user explicitly asks for them.
"""


async def main() -> None:
    """Run the code generator agent example."""
    plugin = DaytonaPlugin(
        labels={"example": "code-generator"},
    )

    agent = Agent(
        model="gemini-2.5-pro",
        name="code_generator_agent",
        instruction=AGENT_INSTRUCTION,
        tools=plugin.get_tools(),
    )

    app = App(
        name="code_generator_app",
        root_agent=agent,
        plugins=[plugin],
    )

    async with InMemoryRunner(app=app) as runner:
        prompt = "Write a TypeScript function called 'groupBy' that takes an array and a key function, and groups array elements by the key. Use proper type annotations."
        print("\n" + "=" * 60)
        print("USER PROMPT:")
        print("=" * 60)
        print(prompt)
        print("-" * 60)

        response = await runner.run_debug(prompt)
        final_response = extract_final_response(response)

        print("\nAGENT RESPONSE:")
        print("-" * 60)
        print(final_response)
        print("=" * 60)

    print("\nApp closed, sandbox cleaned up. Done!")


if __name__ == "__main__":
    asyncio.run(main())
```
**Key advantages of this approach:**
- **Verified code:** Every response has been tested in a real execution environment
- **Secure execution:** Code runs in isolated Daytona sandboxes, not on your machine
- **Multi-language support:** Generate and test Python, JavaScript, or TypeScript
- **Automatic iteration:** Agent fixes issues until tests pass
- **Flexible output:** Returns only the working function by default, or includes tests if explicitly requested in the prompt
### 12. API Reference
For the complete API reference of the Daytona ADK plugin, including all available tools and configuration options, see the [daytona-adk documentation](https://github.com/daytonaio/daytona-adk-plugin#available-tools).
Daytona provides a comprehensive set of guides to help you get started.
The guides cover a wide range of topics, from basic usage to advanced topics, and showcase various types of integrations between Daytona and other tools.
This package provides the `DaytonaDataAnalysisTool`, a LangChain tool integration that enables agents to perform secure Python data analysis in a sandboxed environment. It supports multi-step workflows, file uploads/downloads, and custom result handling, making it ideal for automating data analysis tasks with LangChain agents.
This page demonstrates the use of this tool with a basic example analyzing a vehicle valuations dataset. Our goal is to analyze how vehicle prices vary by manufacturing year and create a line chart showing average price per year.
---
### 1. Workflow Overview
You upload your dataset and provide a natural language prompt describing the analysis you want. The agent reasons about your request, determines how to use the `DaytonaDataAnalysisTool` to perform the task on your dataset, and executes the analysis securely in a Daytona sandbox.
You provide the data and describe what insights you need - the agent handles the rest.
### 2. Project Setup
#### Clone the Repository
Clone the [repository](https://github.com/daytonaio/daytona) and navigate to the example directory:
```bash
git clone https://github.com/daytonaio/daytona.git
cd daytona/guides/python/langchain/data-analysis/anthropic
```
#### Install Dependencies
:::note[Python Version Requirement]
This example requires **Python 3.10 or higher** because it uses LangChain 1.0+ syntax. It's recommended to use a virtual environment (e.g., `venv` or `poetry`) to isolate project dependencies.
:::
Install the required packages for this example:
```bash
pip install -U langchain langchain-anthropic langchain-daytona-data-analysis python-dotenv
```
The packages include:
- `langchain`: LangChain framework for building AI agents
- `langchain-anthropic`: Integration package connecting Claude (Anthropic) APIs and LangChain
- `langchain-daytona-data-analysis`: Provides the `DaytonaDataAnalysisTool` for LangChain agents
- `python-dotenv`: Used for loading environment variables from a `.env` file
#### Configure Environment
Get your API keys and configure your environment:
1. **Daytona API key:** Get it from [Daytona Dashboard](https://app.daytona.io/dashboard/keys)
2. **Anthropic API key:** Get it from [Anthropic Console](https://console.anthropic.com/)
Create a `.env` file in your project:
```bash
DAYTONA_API_KEY=dtn_***
ANTHROPIC_API_KEY=sk-ant-***
```
### 3. Download Dataset
We'll be using a publicly available dataset of vehicle valuation. You can download it directly from:
[https://download.daytona.io/dataset.csv](https://download.daytona.io/dataset.csv)
Download the file and save it as `dataset.csv` in your project directory.
### 4. Initialize the Language Model
Models are the reasoning engine of LangChain agents - they drive decision-making, determine which tools to call, and interpret results.
In this example, we'll use Anthropic's Claude model, which excels at code generation and analytical tasks.
Configure the Claude model with the following parameters:
```python
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(
    model_name="claude-sonnet-4-5-20250929",
    temperature=0,
    timeout=None,
    max_retries=2,
    stop=None
)
```
**Parameters explained:**
- `model_name`: Specifies the Claude model to use
- `temperature`: Tunes the degree of randomness in generation
- `max_retries`: Number of retries allowed for Anthropic API requests
:::tip[Learn More About Models]
For detailed information about LangChain models, different providers, and how to choose the right model for your use case, visit the [LangChain Models documentation](https://docs.langchain.com/oss/python/langchain/models).
:::
### 5. Define the Result Handler
When the agent executes Python code in the sandbox, it generates artifacts like charts and output logs. We can define a handler function to process these results.
This function will extract chart data from the execution artifacts and save them as PNG files:
```python
import base64
from daytona import ExecutionArtifacts
def process_data_analysis_result(result: ExecutionArtifacts):
    # Print the standard output from code execution
    print("Result stdout", result.stdout)

    result_idx = 0
    for chart in result.charts:
        if chart.png:
            # Charts are returned in base64 format
            # Decode and save them as PNG files
            with open(f'chart-{result_idx}.png', 'wb') as f:
                f.write(base64.b64decode(chart.png))
            print(f'Chart saved to chart-{result_idx}.png')
            result_idx += 1
```
This handler processes execution artifacts by:
- Logging stdout output from the executed code
- Extracting chart data from the artifacts
- Decoding base64-encoded PNG charts
- Saving them to local files
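The decode step can be seen in isolation: charts arrive as base64 strings, and `base64.b64decode` recovers the raw PNG bytes that get written to disk. A minimal round trip with placeholder bytes (no sandbox involved; `fake_png` is just stand-in data):

```python
import base64

# Placeholder bytes standing in for a real PNG payload.
fake_png = b"\x89PNG\r\n\x1a\n...image data..."

# The sandbox returns chart images base64-encoded, like chart.png in the handler.
encoded = base64.b64encode(fake_png).decode("ascii")

# Decoding recovers the original bytes, ready to write to a .png file.
decoded = base64.b64decode(encoded)
assert decoded == fake_png
```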
### 6. Configure the Data Analysis Tool
Now we'll initialize the `DaytonaDataAnalysisTool` and upload our dataset.
```python
from langchain_daytona_data_analysis import DaytonaDataAnalysisTool
# Initialize the tool with our result handler
DataAnalysisTool = DaytonaDataAnalysisTool(
on_result=process_data_analysis_result
)
# Upload the dataset with metadata describing its structure
with open("./dataset.csv", "rb") as f:
DataAnalysisTool.upload_file(
f,
description=(
"This is a CSV file containing vehicle valuations. "
"Relevant columns:\n"
"- 'year': integer, the manufacturing year of the vehicle\n"
"- 'price_in_euro': float, the listed price of the vehicle in Euros\n"
"Drop rows where 'year' or 'price_in_euro' is missing, non-numeric, or an outlier."
)
)
```
**Key points:**
- The `on_result` parameter connects our custom result handler
- The `description` provides context about the dataset structure to the agent
- Column descriptions help the agent understand how to process the data
- Data cleaning instructions ensure quality analysis
### 7. Create and Run the Agent
Finally, we'll create the LangChain agent with our configured model and tool, then invoke it with our analysis request.
```python
from langchain.agents import create_agent
# Create the agent with the model and data analysis tool
agent = create_agent(model, tools=[DataAnalysisTool], debug=True)
# Invoke the agent with our analysis request
agent_response = agent.invoke({
"messages": [{
"role": "user",
"content": "Analyze how vehicle prices vary by manufacturing year. Create a line chart showing average price per year."
}]
})
# Always close the tool to clean up sandbox resources
DataAnalysisTool.close()
```
**What happens here:**
1. The agent receives your natural language request
2. It determines that it needs the `DaytonaDataAnalysisTool`
3. It generates Python code to analyze the data
4. The code executes securely in the Daytona sandbox
5. The results are processed by our handler function
6. Charts are saved to your local directory
7. Sandbox resources are cleaned up at the end
### 8. Running Your Analysis
Now you can run the complete code to see the results.
```bash
python data_analysis.py
```
#### Understanding the Agent's Execution Flow
When you run the code, the agent works through your request step by step. Here's what happens in the background:
**Step 1: Agent receives and interprets the request**
The agent acknowledges your analysis request:
```
AI Message: "I'll analyze how vehicle prices vary by manufacturing year and create a line chart showing the average price per year."
```
**Step 2: Agent generates Python code**
The agent generates Python code to explore the dataset first:
```python
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
# Load the dataset
df = pd.read_csv('/home/daytona/dataset.csv')
# Display basic info about the dataset
print("Dataset shape:", df.shape)
print("\nFirst few rows:")
print(df.head())
print("\nColumn names:")
print(df.columns.tolist())
print("\nData types:")
print(df.dtypes)
```
**Step 3: Code executes in Daytona sandbox**
The tool runs this code in a secure sandbox and returns the output:
```
Result stdout Dataset shape: (100000, 15)
First few rows:
Unnamed: 0 ... offer_description
0 75721 ... ST-Line Hybrid Adapt.LED+Head-Up-Display Klima
1 80184 ... blue Trend,Viele Extras,Top-Zustand
2 19864 ... 35 e-tron S line/Matrix/Pano/ACC/SONOS/LM 21
3 76699 ... 2.0 Lifestyle Plus Automatik Navi FAP
4 92991 ... 1.6 T 48V 2WD Spirit LED, WR
[5 rows x 15 columns]
Column names:
['Unnamed: 0', 'brand', 'model', 'color', 'registration_date', 'year',
'price_in_euro', 'power_kw', 'power_ps', 'transmission_type', 'fuel_type',
'fuel_consumption_l_100km', 'fuel_consumption_g_km', 'mileage_in_km',
'offer_description']
Data types:
Unnamed: 0 int64
brand object
model object
color object
registration_date object
year object
price_in_euro object
power_kw object
power_ps object
transmission_type object
fuel_type object
fuel_consumption_l_100km object
fuel_consumption_g_km object
mileage_in_km float64
offer_description object
dtype: object
```
**Step 4: Agent generates detailed analysis code**
Based on the initial dataset information, the agent generates more specific code to examine the key columns:
```python
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
# Load the dataset
df = pd.read_csv('/home/daytona/dataset.csv')
print("Dataset shape:", df.shape)
print("\nColumn names:")
print(df.columns.tolist())
# Check for year and price_in_euro columns
print("\nChecking 'year' column:")
print(df['year'].describe())
print("\nMissing values in 'year':", df['year'].isna().sum())
print("\nChecking 'price_in_euro' column:")
print(df['price_in_euro'].describe())
print("\nMissing values in 'price_in_euro':", df['price_in_euro'].isna().sum())
```
**Step 5: Execution results from sandbox**
The code executes and returns column statistics:
```
Result stdout Dataset shape: (100000, 15)
Column names:
['Unnamed: 0', 'brand', 'model', 'color', 'registration_date', 'year',
'price_in_euro', 'power_kw', 'power_ps', 'transmission_type', 'fuel_type',
'fuel_consumption_l_100km', 'fuel_consumption_g_km', 'mileage_in_km',
'offer_description']
Checking 'year' column:
count 100000
unique 49
top 2019
freq 12056
Name: year, dtype: object
Missing values in 'year': 0
Checking 'price_in_euro' column:
count 100000
unique 11652
top 19990
freq 665
Name: price_in_euro, dtype: object
Missing values in 'price_in_euro': 0
```
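Note that both columns come back as `object` dtype: some entries are not clean numbers, which is why the agent's next step coerces them with `pd.to_numeric(errors='coerce')` and drops the failures. The same idea in plain Python (a sketch of the concept, not the pandas implementation):

```python
def to_number(value):
    """Mimic pd.to_numeric(errors='coerce'): return None where pandas would produce NaN."""
    try:
        return float(value)
    except (TypeError, ValueError):
        return None

raw = ["2019", "19990", "unknown", None]
coerced = [to_number(v) for v in raw]            # [2019.0, 19990.0, None, None]
cleaned = [v for v in coerced if v is not None]  # unparseable entries dropped
```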
**Step 6: Agent generates final analysis and visualization code**
Now that the agent understands the data structure, it generates the complete analysis code with data cleaning, processing, and visualization:
```python
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
# Load the dataset
df = pd.read_csv('/home/daytona/dataset.csv')
print("Original dataset shape:", df.shape)
# Clean the data - remove rows with missing values in year or price_in_euro
df_clean = df.dropna(subset=['year', 'price_in_euro'])
print(f"After removing missing values: {df_clean.shape}")
# Convert to numeric and remove non-numeric values
df_clean['year'] = pd.to_numeric(df_clean['year'], errors='coerce')
df_clean['price_in_euro'] = pd.to_numeric(df_clean['price_in_euro'], errors='coerce')
# Remove rows where conversion failed
df_clean = df_clean.dropna(subset=['year', 'price_in_euro'])
print(f"After removing non-numeric values: {df_clean.shape}")
# Remove outliers using IQR method for both year and price
def remove_outliers(df, column):
Q1 = df[column].quantile(0.25)
Q3 = df[column].quantile(0.75)
IQR = Q3 - Q1
lower_bound = Q1 - 1.5 * IQR
upper_bound = Q3 + 1.5 * IQR
return df[(df[column] >= lower_bound) & (df[column] <= upper_bound)]
df_clean = remove_outliers(df_clean, 'year')
print(f"After removing year outliers: {df_clean.shape}")
df_clean = remove_outliers(df_clean, 'price_in_euro')
print(f"After removing price outliers: {df_clean.shape}")
print("\nCleaned data summary:")
print(df_clean[['year', 'price_in_euro']].describe())
# Calculate average price per year
avg_price_by_year = df_clean.groupby('year')['price_in_euro'].mean().sort_index()
print("\nAverage price by year:")
print(avg_price_by_year)
# Create line chart
plt.figure(figsize=(14, 7))
plt.plot(avg_price_by_year.index, avg_price_by_year.values, marker='o',
linewidth=2, markersize=6, color='#2E86AB')
plt.xlabel('Manufacturing Year', fontsize=12, fontweight='bold')
plt.ylabel('Average Price (€)', fontsize=12, fontweight='bold')
plt.title('Average Vehicle Price by Manufacturing Year', fontsize=14,
fontweight='bold', pad=20)
plt.grid(True, alpha=0.3, linestyle='--')
plt.xticks(rotation=45)
# Format y-axis to show currency
ax = plt.gca()
ax.yaxis.set_major_formatter(plt.FuncFormatter(lambda x, p: f'€{x:,.0f}'))
plt.tight_layout()
plt.show()
# Additional statistics
print(f"\nTotal number of vehicles analyzed: {len(df_clean)}")
print(f"Year range: {int(df_clean['year'].min())} - {int(df_clean['year'].max())}")
print(f"Price range: €{df_clean['price_in_euro'].min():.2f} - €{df_clean['price_in_euro'].max():.2f}")
print(f"Overall average price: €{df_clean['price_in_euro'].mean():.2f}")
```
This comprehensive code performs data cleaning and outlier removal, computes average prices by year, and creates a polished visualization.
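The 1.5×IQR rule the agent applies can be reproduced with the standard library alone. A minimal sketch (note that `statistics.quantiles` uses a slightly different interpolation than pandas' `quantile`, so the fences can differ marginally):

```python
import statistics

def iqr_bounds(values):
    """Return the 1.5 * IQR fences used to flag outliers."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    return q1 - 1.5 * iqr, q3 + 1.5 * iqr

prices = [1, 2, 3, 4, 5, 100]
low, high = iqr_bounds(prices)
filtered = [v for v in prices if low <= v <= high]
print(filtered)  # the extreme value 100 falls outside the fences
```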
**Step 7: Final execution and chart generation**
The code executes successfully in the sandbox, processes the data, and generates the visualization:
```
Result stdout Original dataset shape: (100000, 15)
After removing missing values: (100000, 15)
After removing non-numeric values: (99946, 15)
After removing year outliers: (96598, 15)
After removing price outliers: (90095, 15)
Cleaned data summary:
year price_in_euro
count 90095.000000 90095.000000
mean 2016.698563 22422.266707
std 4.457647 12964.727116
min 2005.000000 150.000000
25% 2014.000000 12980.000000
50% 2018.000000 19900.000000
75% 2020.000000 29500.000000
max 2023.000000 62090.000000
Average price by year:
year
2005.0 5968.124319
2006.0 6870.881523
2007.0 8015.234473
2008.0 8788.644495
2009.0 8406.198576
2010.0 10378.815972
2011.0 11540.640435
2012.0 13306.642261
2013.0 14512.707025
2014.0 15997.682899
2015.0 18563.864358
2016.0 20124.556294
2017.0 22268.083322
2018.0 24241.123673
2019.0 26757.469111
2020.0 29400.163494
2021.0 30720.168646
2022.0 33861.717552
2023.0 33119.840175
Name: price_in_euro, dtype: float64
Total number of vehicles analyzed: 90095
Year range: 2005 - 2023
Price range: €150.00 - €62090.00
Overall average price: €22422.27
Chart saved to chart-0.png
```
The agent successfully completed the analysis, showing that vehicle prices generally increased from 2005 (€5,968) to 2022 (€33,862), with a slight decrease in 2023. The result handler captured the generated chart and saved it as `chart-0.png`.
You should find the generated chart, `chart-0.png`, in your project directory.
### 9. Complete Implementation
Here is the complete, ready-to-run example:
```python
import base64
from dotenv import load_dotenv
from langchain.agents import create_agent
from langchain_anthropic import ChatAnthropic
from daytona import ExecutionArtifacts
from langchain_daytona_data_analysis import DaytonaDataAnalysisTool
load_dotenv()
model = ChatAnthropic(
model_name="claude-sonnet-4-5-20250929",
temperature=0,
timeout=None,
max_retries=2,
stop=None
)
def process_data_analysis_result(result: ExecutionArtifacts):
# Print the standard output from code execution
print("Result stdout", result.stdout)
result_idx = 0
for chart in result.charts:
if chart.png:
# Save the png to a file
# The png is in base64 format.
with open(f'chart-{result_idx}.png', 'wb') as f:
f.write(base64.b64decode(chart.png))
print(f'Chart saved to chart-{result_idx}.png')
result_idx += 1
def main():
DataAnalysisTool = DaytonaDataAnalysisTool(
on_result=process_data_analysis_result
)
try:
with open("./dataset.csv", "rb") as f:
DataAnalysisTool.upload_file(
f,
description=(
"This is a CSV file containing vehicle valuations. "
"Relevant columns:\n"
"- 'year': integer, the manufacturing year of the vehicle\n"
"- 'price_in_euro': float, the listed price of the vehicle in Euros\n"
"Drop rows where 'year' or 'price_in_euro' is missing, non-numeric, or an outlier."
)
)
agent = create_agent(model, tools=[DataAnalysisTool], debug=True)
agent_response = agent.invoke(
{"messages": [{"role": "user", "content": "Analyze how vehicle prices vary by manufacturing year. Create a line chart showing average price per year."}]}
)
finally:
DataAnalysisTool.close()
if __name__ == "__main__":
main()
```
**Key advantages of this approach:**
- **Secure execution:** Code runs in isolated Daytona sandbox
- **Automatic artifact capture:** Charts, tables, and outputs are automatically extracted
- **Natural language interface:** Describe analysis tasks in plain English
- **Framework integration:** Seamlessly works with LangChain's agent ecosystem
### 10. API Reference
The following public methods are available on `DaytonaDataAnalysisTool`:
#### download_file
```python
def download_file(remote_path: str) -> bytes
```
Downloads a file from the sandbox by its remote path.
**Arguments**:
- `remote_path` - str: Path to the file in the sandbox.
**Returns**:
- `bytes` - File contents.
**Example**:
```python
# Download a file from the sandbox
file_bytes = tool.download_file("/home/daytona/results.csv")
```
#### upload_file
```python
def upload_file(file: IO, description: str) -> SandboxUploadedFile
```
Uploads a file to the sandbox. The file is placed in `/home/daytona/`.
**Arguments**:
- `file` - IO: File-like object to upload.
- `description` - str: Description of the file, explaining its purpose and the type of data it contains.
**Returns**:
- [`SandboxUploadedFile`](#sandboxuploadedfile) - Metadata about the uploaded file.
**Example**:
Suppose you want to analyze sales data for a retail business. You have a CSV file named `sales_q3_2025.csv` containing columns like `transaction_id`, `date`, `product`, `quantity`, and `revenue`. You want to upload this file and provide a description that gives context for the analysis.
```python
with open("sales_q3_2025.csv", "rb") as f:
uploaded = tool.upload_file(
f,
"CSV file containing Q3 2025 retail sales transactions. Columns: transaction_id, date, product, quantity, revenue."
)
```
#### remove_uploaded_file
```python
def remove_uploaded_file(uploaded_file: SandboxUploadedFile) -> None
```
Removes a previously uploaded file from the sandbox.
**Arguments**:
- `uploaded_file` - [`SandboxUploadedFile`](#sandboxuploadedfile): The file to remove.
**Returns**:
- None
**Example**:
```python
# Remove an uploaded file
tool.remove_uploaded_file(uploaded)
```
#### get_sandbox
```python
def get_sandbox() -> Sandbox
```
Gets the current sandbox instance.
This method provides access to the Daytona sandbox instance, allowing you to inspect sandbox properties and metadata, as well as perform any sandbox-related operations. For details on available attributes and methods, see the [Sandbox](#sandbox) data structure section below.
**Arguments**:
- None
**Returns**:
- [`Sandbox`](#sandbox) - Sandbox instance.
**Example**:
```python
sandbox = tool.get_sandbox()
```
#### install_python_packages
```python
def install_python_packages(package_names: str | list[str]) -> None
```
Installs one or more Python packages in the sandbox using pip.
**Arguments**:
- `package_names` - str | list[str]: Name(s) of the package(s) to install.
**Returns**:
- None
:::note
The list of preinstalled packages in a sandbox can be found at [Daytona's Default Snapshot documentation](https://www.daytona.io/docs/en/snapshots.md#default-snapshots).
:::
**Example**:
```python
# Install a single package
tool.install_python_packages("pandas")
# Install multiple packages
tool.install_python_packages(["numpy", "matplotlib"])
```
#### close
```python
def close() -> None
```
Closes and deletes the sandbox environment.
**Arguments**:
- None
**Returns**:
- None
:::note
Call this method when you are finished with all data analysis tasks to properly clean up resources and avoid unnecessary usage.
:::
**Example**:
```python
# Close the sandbox and clean up
tool.close()
```
### 11. Data Structures
#### SandboxUploadedFile
Represents metadata about a file uploaded to the sandbox.
- `name`: `str` - Name of the uploaded file in the sandbox
- `remote_path`: `str` - Full path to the file in the sandbox
- `description`: `str` - Description provided during upload
#### Sandbox
Represents a Daytona sandbox instance.
See the full structure and API in the [Daytona Python SDK Sandbox documentation](https://www.daytona.io/docs/en/python-sdk/sync/sandbox.md#sandbox).
This guide demonstrates how to run an autonomous coding agent based on [Letta Code](https://docs.letta.com/letta-code/) inside a Daytona sandbox environment. The agent can develop web apps, write code in any language, install dependencies, and run scripts. Letta Code uses stateful agents with built-in memory, allowing conversations to persist across sessions.
---
### 1. Workflow Overview
When you launch the main script, a Daytona sandbox is created and Letta Code is installed inside it. The agent is configured with a custom Daytona-aware system prompt.
The script provides an interactive CLI interface where you can chat with the agent and issue commands:
```
$ npm run start
Creating sandbox...
Installing Letta Code...
Starting Letta Code...
Initializing agent...
Agent initialized. Press Ctrl+C at any time to exit.
User: create a beautiful, professional themed app that lets me write markdown documents and render them live
Thinking...
🔧 TodoWrite
🔧 Write /home/daytona/markdown-editor/index.html
🔧 TodoWrite
🔧 Start HTTP server on port 8080
🔧 TodoWrite
Perfect! I've created a beautiful markdown editor with live preview for you! 🎉
## Access your app here:
https://8080-c157e5cb-5e11-4bb6-883d-c873169223b8.proxy.daytona.works
## Features:
✨ **Live Preview** — Real-time markdown rendering
📝 **Full Markdown Support** — Headers, text styles, lists, code blocks, tables, links, images
💾 **Auto-Save** — Persists to browser localStorage
📥 **Export** — Download as `.md` or standalone `.html`
```
The agent can host web apps and provide you with a preview link using the [Daytona Preview Links](https://www.daytona.io/docs/en/preview-and-authentication.md) feature. When your task involves running or previewing a web application, the agent automatically hosts the app and generates a link for you to inspect the live result.
You can continue interacting with your agent until you are finished. When you exit the program, the sandbox will be deleted automatically.
### 2. Project Setup
#### Clone the Repository
First, clone the Daytona [repository](https://github.com/daytonaio/daytona.git) and navigate to the example directory:
```bash
git clone https://github.com/daytonaio/daytona.git
cd daytona/guides/typescript/letta-code
```
#### Configure Environment
Get your Daytona API key from the [Daytona Dashboard](https://app.daytona.io/dashboard/keys) and your Letta API key from [Letta Platform](https://app.letta.com/api-keys).
Copy `.env.example` to `.env` and add your keys:
```bash
DAYTONA_API_KEY=your_daytona_key
SANDBOX_LETTA_API_KEY=your_letta_api_key
```
:::caution[API Key Security]
In this example, your Letta API key is passed into the sandbox environment and may be accessible to any code executed within it.
:::
#### Local Usage
:::note[Node.js Version]
Node.js 18 or newer is required to run this example. Please ensure your environment meets this requirement before proceeding.
:::
Install dependencies:
```bash
npm install
```
Run the example:
```bash
npm run start
```
The Letta Code agent will initialize and present an interactive prompt where you can issue commands.
### 3. Understanding the Agent's Architecture
This example consists of two main TypeScript files:
- **index.ts**: The main program that creates the Daytona sandbox, installs Letta Code, configures the system prompt, and provides an interactive CLI interface.
- **letta-session.ts**: A helper class that manages PTY-based bidirectional communication with Letta Code, handling JSON message streaming and response parsing.
#### Initialization
On initialization, the main program:
1. Creates a new [Daytona sandbox](https://www.daytona.io/docs/en/sandboxes.md) with your Letta API key included in the environment variables.
2. Installs Letta Code globally inside the sandbox by running `npm install` with [process execution](https://www.daytona.io/docs/en/process-code-execution.md#process-execution).
3. Creates a [PTY (pseudoterminal)](https://www.daytona.io/docs/en/pty.md) session in the sandbox for bidirectional communication with Letta Code.
4. Launches Letta Code in [bidirectional headless mode](https://docs.letta.com/letta-code/headless/) with stream-json format through the PTY.
5. Waits for user input and sends prompts to the agent through the PTY session.
#### Main Program Code
The program creates a [Daytona sandbox](https://www.daytona.io/docs/en/sandboxes.md) with the Letta API key passed as an environment variable:
```typescript
sandbox = await daytona.create({
envVars: { LETTA_API_KEY: process.env.SANDBOX_LETTA_API_KEY },
})
```
#### Running Letta Code in a Pseudoterminal
A [PTY (pseudoterminal)](https://www.daytona.io/docs/en/pty.md) is created for bidirectional communication with Letta Code:
```typescript
this.ptyHandle = await this.sandbox.process.createPty({
id: `letta-pty-${Date.now()}`,
onData: (data: Uint8Array) => this.handleData(data),
})
```
Letta Code is then launched in [bidirectional headless mode](https://docs.letta.com/letta-code/headless/#bidirectional-mode) through the PTY:
```typescript
await this.ptyHandle.sendInput(
`letta --new --system-custom "${systemPrompt.replace(/"/g, '\\"')}" --input-format stream-json --output-format stream-json --yolo -p\n`,
)
```
The `stream-json` input and output formats let our program exchange JSON messages with the agent in real time.
The `--system-custom` flag passes a custom system prompt to the agent. Our prompt configures the agent with Daytona-specific instructions, including a URL pattern so the agent can generate preview links.
The `--yolo` flag allows the agent to run shell commands without requiring explicit user approval for each command.
#### Message Handling
To send prompts to the agent, the main script calls the `processPrompt()` method, which formats the user input as a JSON message and sends it to the PTY using `this.ptyHandle.sendInput()` as demonstrated above.
Formatted user messages look like this:
```json
{"type": "user", "message": {"role": "user", "content": "create a simple web server"}}
```
The agent responds with streaming JSON messages. Tool calls arrive as fragments:
```json
{"type": "message", "message_type": "approval_request_message", "tool_call": {"tool_call_id": "call_123", "name": "Bash", "arguments": "{\"command\": \"python3 -m http.server 8080\"}"}}
```
These JSON fragments are parsed by the `handleParsedMessage()` method. When multiple consecutive fragments are received for the same tool call, they are combined into a single tool call object. When a tool call or message is finished, the result is formatted and displayed to the user.
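The fragment-merging step can be illustrated with a small sketch (Python here for brevity; the actual logic lives in `letta-session.ts`, and the fragment shape is simplified to just the fields involved):

```python
def merge_tool_call_fragments(fragments):
    """Concatenate streamed 'arguments' chunks that share a tool_call_id."""
    calls = {}
    for frag in fragments:
        call = frag["tool_call"]
        entry = calls.setdefault(
            call["tool_call_id"], {"name": call["name"], "arguments": ""}
        )
        entry["arguments"] += call["arguments"]  # append this fragment's chunk
    return calls

# Two fragments of one Bash tool call, arriving in order:
fragments = [
    {"tool_call": {"tool_call_id": "call_123", "name": "Bash", "arguments": '{"command": '}},
    {"tool_call": {"tool_call_id": "call_123", "name": "Bash", "arguments": '"ls"}'}},
]
merged = merge_tool_call_fragments(fragments)
print(merged["call_123"]["arguments"])  # {"command": "ls"}
```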
#### Clean up
When you exit the main program, the Daytona sandbox and all files are automatically deleted.
**Key advantages:**
- Secure, isolated execution in Daytona sandboxes
- Stateful agents with persistent memory across sessions
- Full Letta Code capabilities including file operations and shell commands
- Agents can be viewed in [Letta's Agent Development Environment](https://app.letta.com/)
- Automatic preview link generation for hosted services
- Multi-language and full-stack support
- Simple setup and automatic cleanup
This guide demonstrates how to configure the [Mastra coding agent](https://github.com/mastra-ai/template-coding-agent) to use Daytona sandboxes, enabling you to leverage AI capabilities for any coding-related task in a secure, isolated environment.
---
### 1. Workflow Overview
Once configured, you can use Mastra Studio to interact with the coding agent through a ChatGPT-like interface. This enables human-in-the-loop workflows where you can guide the agent, review its decisions, and iterate on solutions in real time, all while the agent executes tasks securely within Daytona sandboxes.
### 2. Project Setup
:::note[Node.js Version]
Node.js version 20 or higher is required to run the coding agent. Please ensure your environment meets this requirement before proceeding.
:::
#### Clone the Repository
Clone the Mastra coding agent template repository, which includes the agent implementation and Daytona integration:
```bash
git clone https://github.com/mastra-ai/template-coding-agent.git
cd template-coding-agent
```
#### Configure Environment
Create a `.env` file in the project root directory:
```bash
touch .env
```
The `.env` file requires the following configuration:
- **LLM Provider**: The AI model provider for your coding agent
- **Model**: The specific model to use from your chosen provider
- **Sandbox Provider**: Daytona configuration for isolated execution
For this guide, we'll use OpenAI as the LLM provider with the `gpt-4o-mini` model:
```env
OPENAI_API_KEY=your_openai_key
MODEL=openai/gpt-4o-mini
```
Next, configure Daytona as your sandbox provider by adding your API key (available from the [Daytona Dashboard](https://app.daytona.io/dashboard/keys)):
```env
DAYTONA_API_KEY=your-daytona-api-key-here
```
#### Install Dependencies
Install the required packages using pnpm:
```bash
pnpm install
```
### 3. Running the Agent
Mastra Studio provides a ChatGPT-like interface for interacting with your coding agent. This mode offers:
- **Conversation History**: Previous conversations are stored and organized in threads
- **Visual Debugging**: Inspect agent execution steps, workflow, and tool calls
- **Model Switching**: Easily switch between different AI models
- **Tool Inspection**: View which tools your agent is using in real-time
For a complete overview of all features and capabilities, visit the [Mastra Studio documentation](https://mastra.ai/docs/getting-started/studio).
Start the dev server with:
```bash
pnpm run dev
```
If the dev server starts successfully, you'll see the terminal output displaying the URLs where you can access Mastra Studio:
```bash
│ Studio: http://localhost:4111
│ API: http://localhost:4111/api
```
Once started, open the provided URL in your browser to access the interactive interface. You can interact with your agent while monitoring its workflow in the terminal, where detailed logs show execution steps and low-level parameters.
Below is an example of terminal logs generated when the agent calls the `writeFile` tool to create a JavaScript file with a basic "Hello, world!" output:
```json
{
"text": "",
"toolCalls": [
{
"type": "tool-call",
"runId": "ab2a1d08-91c6-4028-9046-3446a721527f",
"from": "AGENT",
"payload": {
"toolCallId": "call_NiLLgBmgrYLSL0MsrG54E4A5",
"toolName": "writeFile",
"args": {
"sandboxId": "2152d23b-5742-47c2-9992-4414d4144869",
"path": "hello.js",
"content": "console.log('Hello, world!');"
},
"providerMetadata": {
"openai": {
"itemId": "fc_00bba3412cd22a2b0069399fbaeef881909b0583f359cbc33c"
}
}
}
}
],
"toolResults": [
{
"type": "tool-result",
"runId": "ab2a1d08-91c6-4028-9046-3446a721527f",
"from": "AGENT",
"payload": {
"args": {
"sandboxId": "2152d23b-5742-47c2-9992-4414d4144869",
"path": "hello.js",
"content": "console.log('Hello, world!');"
},
"toolCallId": "call_NiLLgBmgrYLSL0MsrG54E4A5",
"toolName": "writeFile",
"result": {
"success": true,
"path": "/home/daytona/hello.js"
},
"providerMetadata": {
"openai": {
"itemId": "fc_00bba3412cd22a2b0069399fbaeef881909b0583f359cbc33c"
}
}
}
}
],
"finishReason": "tool-calls",
"usage": {
"inputTokens": 4243,
"outputTokens": 53,
"totalTokens": 4296,
"reasoningTokens": 0,
"cachedInputTokens": 4096
},
"runId": "ab2a1d08-91c6-4028-9046-3446a721527f"
}
```
These logs give complete visibility into tool execution, including arguments, results, token usage with caching metrics, and unique identifiers for debugging.
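For instance, the `usage` objects from successive log entries can be aggregated in a few lines (a hypothetical helper written in Python for illustration; the field names match the log shown above):

```python
def total_usage(entries):
    """Sum token counters across a list of Mastra run log entries."""
    totals = {"inputTokens": 0, "outputTokens": 0, "totalTokens": 0}
    for entry in entries:
        usage = entry.get("usage", {})
        for key in totals:
            totals[key] += usage.get(key, 0)
    return totals

# Example: the run above plus a made-up follow-up run.
logs = [
    {"usage": {"inputTokens": 4243, "outputTokens": 53, "totalTokens": 4296}},
    {"usage": {"inputTokens": 1200, "outputTokens": 40, "totalTokens": 1240}},
]
print(total_usage(logs))  # {'inputTokens': 5443, 'outputTokens': 93, 'totalTokens': 5536}
```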
### 4. Example Usage
Let's walk through a practical example to see the Mastra coding agent and Daytona sandbox integration in action. We'll ask the agent to complete the following task:
> _"Create a Python file with a function that reverses a string, write a test file for it, and run the tests."_
#### Agent Workflow
The agent executes this task through a series of coordinated steps, utilizing different tools to interact with the Daytona sandbox:
##### 1. Create Sandbox
The agent first provisions a new Daytona sandbox for Python development:
**Tool:** `createSandbox`
**Arguments:**
```json
{
"name": "reverse_string_project",
"labels": null,
"language": "python",
"envVars": null
}
```
**Result:**
```json
{
"sandboxId": "bdfa9456-4945-43a7-97df-b9bfbcbdc62e"
}
```
##### 2. Write Project Files
Next, the agent creates both the implementation and test files in a single operation:
**Tool:** `writeFiles`
**Arguments:**
```json
{
"sandboxId": "bdfa9456-4945-43a7-97df-b9bfbcbdc62e",
"files": [
{
"path": "reverse_string.py",
"data": "def reverse_string(s):\n \"\"\"Return the reversed version of the input string s.\"\"\"\n return s[::-1]\n"
},
{
"path": "test_reverse_string.py",
"data": "import unittest\nfrom reverse_string import reverse_string\n\nclass TestReverseString(unittest.TestCase):\n def test_regular(self):\n self.assertEqual(reverse_string(\"hello\"), \"olleh\")\n def test_empty(self):\n self.assertEqual(reverse_string(\"\"), \"\")\n def test_single_char(self):\n self.assertEqual(reverse_string(\"a\"), \"a\")\n def test_numbers(self):\n self.assertEqual(reverse_string(\"12345\"), \"54321\")\n\nif __name__ == \"__main__\":\n unittest.main()\n"
}
]
}
```
**Result:**
```json
{
"success": true,
"filesWritten": [
"/home/daytona/reverse_string.py",
"/home/daytona/test_reverse_string.py"
]
}
```
##### 3. Execute Tests
Finally, the agent runs the test suite to verify the implementation:
**Tool:** `runCommand`
**Arguments:**
```json
{
"sandboxId": "bdfa9456-4945-43a7-97df-b9bfbcbdc62e",
"command": "python3 test_reverse_string.py",
"envs": null,
"workingDirectory": null,
"timeoutSeconds": 20,
"captureOutput": true
}
```
**Result:**
```json
{
"success": true,
"exitCode": 0,
"stdout": "....\n----------------------------------------------------------------------\nRan 4 tests in 0.000s\n\nOK\n",
"command": "python3 test_reverse_string.py",
"executionTime": 218
}
```
#### Agent Response
After successfully completing all steps, the agent provides a detailed summary:
```
The project is complete! Here's what was done:
✓ Created reverse_string.py with a function to reverse a string.
✓ Created test_reverse_string.py using Python's unittest to test the function with several cases.
✓ Ran the tests—all tests passed successfully.
Test Output:
....
----------------------------------------------------------------------
Ran 4 tests in 0.000s
OK
Would you like to see the code for either file or make any changes?
```
This example demonstrates how the agent autonomously handles the entire development workflow, from environment setup to test execution, all within the secure Daytona sandbox environment.
:::tip[Sandbox Management]
After the agent completes its task, you'll see the active sandbox in your [Daytona Dashboard](https://app.daytona.io/dashboard). It's recommended to clean up resources when finished, unless your use case requires the sandbox to remain active (e.g. for preview URLs or ongoing development).
:::
**Key Advantages**
- **Secure Isolation**: All agent operations run in isolated Daytona sandboxes, protecting your local environment
- **Multi-Language Support**: Execute code across different programming languages without local setup
- **Enhanced Debugging**: Use Mastra Studio to visualize and debug agent workflows in real-time
- **Scalable Execution**: Leverage Daytona's cloud infrastructure for resource-intensive tasks
Guides for using the [OpenAI Agents SDK](https://developers.openai.com/api/docs/guides/agents) with Daytona.
This guide walks through the core patterns for running AI agents in isolated cloud sandboxes using the [OpenAI Agents SDK](https://developers.openai.com/api/docs/guides/agents) and Daytona. We start from a simple example and progressively layer on multi-agent handoffs, memory, structured outputs, and human-in-the-loop workflows.
See also the [Text-to-SQL Agent with the OpenAI Agents SDK and Daytona](https://www.daytona.io/docs/en/guides/openai-agents/text-to-sql-agent-openai-agents-sdk.md) guide for a complete project built on these patterns.
---
## Prerequisites
Install the Agents SDK with the Daytona extra:
```shell
pip install openai-agents[daytona]
```
Set your environment variables:
```shell
export OPENAI_API_KEY=...
export DAYTONA_API_KEY=... # from https://app.daytona.io/dashboard/keys
```
## 1\. Give Your Agent a Shell
The basic pattern: declare a workspace, give an agent shell access, and let it explore, write code, and run it.
```py
from openai.types.responses import ResponseTextDeltaEvent
from agents import Runner
from agents.run import RunConfig
from agents.sandbox import Manifest, SandboxAgent, SandboxRunConfig
from agents.sandbox.capabilities import Shell
from agents.sandbox.entries import File
from agents.extensions.sandbox import DaytonaSandboxClient, DaytonaSandboxClientOptions
DAYTONA_ROOT = "/home/daytona/workspace"
# Declare the workspace contents up front.
# Use Daytona's home directory as root instead of the default /workspace.
manifest = Manifest(root=DAYTONA_ROOT, entries={
"data/sales.csv": File(content=b"quarter,revenue\nQ1,3200000\nQ2,3600000\nQ3,4200000\nQ4,3900000"),
"requirements.txt": File(content=b"pandas\nmatplotlib"),
})
agent = SandboxAgent(
name="Data Analyst",
model="gpt-5.4",
instructions=(
"You're a data analyst with shell access to a sandbox. "
"Inspect the workspace, install dependencies, write and run code to answer questions."
),
default_manifest=manifest,
capabilities=[Shell()],
)
client = DaytonaSandboxClient()
run_config = RunConfig(
sandbox=SandboxRunConfig(client=client, options=DaytonaSandboxClientOptions())
)
result = Runner.run_streamed(
agent,
"Which quarter had the highest revenue? Write a script to plot the trend and save it as chart.png.",
run_config=run_config,
)
async for event in result.stream_events():
if event.type == "raw_response_event" and isinstance(event.data, ResponseTextDeltaEvent):
print(event.data.delta, end="", flush=True)
elif event.type == "run_item_stream_event":
if event.name == "tool_called":
raw = event.item.raw_item
name = raw.get("name", "") if isinstance(raw, dict) else getattr(raw, "name", "")
args = raw.get("arguments", "") if isinstance(raw, dict) else getattr(raw, "arguments", "")
print(f"\n[{name}] {args}")
elif event.name == "tool_output":
print(f" → {event.item.output[:200]}")
await client.close()
```
The agent will likely `cat` the CSV, `pip install -r requirements.txt`, write a Python script, run it, and report back, all through the shell tool. A typical run might look like:
```
[exec_command] {"cmd": "cat data/sales.csv"}
→ quarter,revenue\nQ1,3200000\nQ2,3600000\nQ3,4200000\nQ4,3900000
[exec_command] {"cmd": "pip install -r requirements.txt"}
→ Successfully installed pandas matplotlib ...
[exec_command] {"cmd": "python plot.py"}
→ Chart saved to chart.png
Q3 had the highest revenue at $4.2M. I've saved a trend chart to chart.png.
```
**What's happening:**
- **`Manifest`** describes the workspace declaratively: files, directories, and environment variables (via `environment=Environment(value={"API_KEY": "..."})`, where `Environment` is imported from `agents.sandbox.manifest`). You can also pass `Manifest(entries={})` for an empty workspace and let the agent create everything from scratch.
- **`SandboxAgent`** adds `default_manifest` and `capabilities` on top of a regular `Agent`. You can still pass `tools=` (function tools) and `mcp_servers=` alongside capabilities.
- **`Shell`** gives the model an `exec_command` tool that can run `cat`, `ls`, `find`, `grep`, `pip install`, `python script.py`, etc. inside the sandbox. The agent can read *and* write: creating files, installing packages, and running programs are all fair game.
- **`DaytonaSandboxClient`** provisions a remote cloud sandbox.
- **`Runner.run_streamed`** streams text token-by-token and emits structured events when tools are called.
The sandbox is fully isolated, so there's no risk to your host machine. The agent has full Linux access inside it.
## 2\. Multi-Turn Conversations
The previous example runs a single question and exits. In practice you'll often want an interactive session where the human asks questions, the agent responds, and conversation history carries forward. The sandbox stays alive across turns so the agent can build on previous work.
```py
client = DaytonaSandboxClient()
session = await client.create(manifest=manifest, options=DaytonaSandboxClientOptions())
await session.start()
run_config = RunConfig(sandbox=SandboxRunConfig(session=session))
conversation = []
while True:
question = input("> ")
if question.strip().lower() == "exit":
break
input_items = conversation + [{"role": "user", "content": question}]
result = Runner.run_streamed(agent, input_items, run_config=run_config)
async for event in result.stream_events():
if event.type == "raw_response_event" and isinstance(event.data, ResponseTextDeltaEvent):
print(event.data.delta, end="", flush=True)
print()
# Carry conversation history forward so the agent remembers previous turns
conversation = result.to_input_list()
await session.aclose()
await client.close()
```
This example uses `result.to_input_list()`, which serializes the full conversation (including tool calls and their results) into a format you can pass back on the next turn. The agent sees the entire history, so follow-ups like "break that down by quarter" or "now plot it" just work. The SDK also supports other state strategies: sessions, `conversation_id`, and `previous_response_id` – see the [State and conversation management](https://openai.github.io/openai-agents-python/running_agents/#state-and-conversation-management) docs for the full picture.
This pattern composes with everything else in this guide. You can add handoffs, memory, pause/resume, etc. on top of a multi-turn loop.
## 3\. Pause and Resume
By default, when a session shuts down the sandbox is deleted. Setting `pause_on_exit=True` changes this: on shutdown, the SDK calls Daytona's pause API (`sandbox.stop()`) instead of `sandbox.delete()`. The sandbox stays on Daytona's infrastructure in a paused state, preserving the filesystem (including any installed packages).
To reconnect on the next run, you need two things:
1. **Daytona keeps the sandbox alive**, paused on their side, identifiable by its sandbox ID.
2. **Your code remembers the sandbox ID**. The SDK captures this in `DaytonaSandboxSessionState`, a Pydantic model you serialize to disk.
When you call `client.resume(saved_state)`, the SDK uses the `sandbox_id` from that state to call `daytona.get(sandbox_id)`. If the sandbox is still there, it calls `sandbox.start()` to wake it. The workspace is already populated, so it skips full manifest apply but still reapplies ephemeral state (like environment variables) and restores snapshots if needed. If the sandbox has expired or been deleted, `resume()` falls through and creates a fresh one from the same config.
```py
from pathlib import Path
from agents.extensions.sandbox import (
DaytonaSandboxClient,
DaytonaSandboxClientOptions,
DaytonaSandboxSessionState,
)
STATE_FILE = Path(".session_state.json")
client = DaytonaSandboxClient()
options = DaytonaSandboxClientOptions(pause_on_exit=True)
# Try to resume a previously paused sandbox
session = None
if STATE_FILE.exists():
saved = DaytonaSandboxSessionState.model_validate_json(STATE_FILE.read_text())
old_sandbox_id = saved.sandbox_id # snapshot before resume() mutates it
try:
session = await client.resume(saved)
if session.state.sandbox_id == old_sandbox_id:
print("Reconnected to existing sandbox.")
else:
print("Previous sandbox expired. Created a new one.")
except Exception:
session = None # fall through to fresh creation
if session is None:
session = await client.create(manifest=manifest, options=options)
# Save state immediately so crashes don't orphan the sandbox
STATE_FILE.write_text(session.state.model_dump_json(indent=2))
# ... run your agent ...
# On clean exit: aclose() persists the workspace, then pauses (or deletes) the remote sandbox
await session.aclose()
await client.close()
```
The Agents SDK also has its own **workspace persistence** mechanism (`persist_workspace`/`hydrate_workspace`) that tars up workspace files and saves them externally (local disk, S3). This is useful when the sandbox itself is gone and you need to restore contents into a new one. It's distinct from **Daytona snapshots** (`sandbox_snapshot_name`), which are pre-built sandbox templates you create sandboxes *from*.
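The idea behind workspace persistence can be illustrated with plain `tarfile`. This is a generic sketch of the tar-and-restore round trip, not the SDK's `persist_workspace`/`hydrate_workspace` API:

```py
import tarfile
import tempfile
from pathlib import Path

def persist(workspace: Path, archive: Path) -> None:
    """Tar up the workspace so it can be stored externally (local disk, S3, ...)."""
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(workspace, arcname=".")

def hydrate(archive: Path, workspace: Path) -> None:
    """Restore archived contents into a fresh workspace directory."""
    workspace.mkdir(parents=True, exist_ok=True)
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(workspace)

# Round-trip demo with throwaway directories.
src = Path(tempfile.mkdtemp())
(src / "notes.txt").write_text("Q3 had the highest revenue")
archive = Path(tempfile.mkdtemp()) / "workspace.tar.gz"
persist(src, archive)

dst = Path(tempfile.mkdtemp())
hydrate(archive, dst)
print((dst / "notes.txt").read_text())  # Q3 had the highest revenue
```

The SDK layers manifest awareness on top of this (e.g. skipping `ephemeral=True` entries), but the external-storage shape is the same.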
## 4\. Handoffs: Routing Work Between Agents
A `SandboxAgent` can hand off to a regular `Agent` and vice versa. Not every agent needs sandbox access: a copywriter can draft an email without a shell.
```py
from agents import Agent, Runner
from agents.run import RunConfig
from agents.sandbox import Manifest, SandboxAgent, SandboxRunConfig
from agents.sandbox.capabilities import Shell
from agents.sandbox.entries import File
from agents.extensions.sandbox import DaytonaSandboxClient, DaytonaSandboxClientOptions
manifest = Manifest(root="/home/daytona/workspace", entries={
"data/sales.csv": File(content=b"quarter,region,revenue\nQ1,NA,3200000\nQ1,EU,2100000\n..."),
})
# The copywriter receives the analyst's findings (no sandbox needed)
copywriter = Agent(
name="Client Email Drafter",
model="gpt-5.4",
instructions="Turn the analyst's findings into a short, friendly client-facing email.",
)
# The analyst has shell access to crunch data, then hands off to the copywriter
analyst = SandboxAgent(
name="Data Analyst",
model="gpt-5.4",
instructions=(
"Analyze the sales data in the workspace. Write and run code to compute trends. "
"Then hand off your findings to the Client Email Drafter."
),
default_manifest=manifest,
capabilities=[Shell()],
handoffs=[copywriter],
)
client = DaytonaSandboxClient()
result = await Runner.run(
analyst,
"Summarize Q1 performance by region for the client.",
run_config=RunConfig(sandbox=SandboxRunConfig(client=client, options=DaytonaSandboxClientOptions())),
)
await client.close()
print(result.final_output) # a polished email, written by the copywriter
```
**The flow:** Analyst (sandbox, reads CSV, runs a script) → Copywriter (no sandbox, writes the email). The final output comes from the copywriter, but it's grounded in the analyst's computed results.
Handoffs can also be **circular**: agents pass control back and forth until one decides to respond directly instead of handing off, which ends the run. In the example above, that would look like:
```py
from agents import handoff
copywriter.handoffs = [handoff(analyst)]
analyst.handoffs = [handoff(copywriter)]
```
You can also have multiple sandbox agents, each with their own isolated workspace and separate `RunConfig`, as shown in the next section.
## 5\. Sandbox Agents as Tools
Instead of handoffs (sequential), you can run sandbox agents as parallel tools under an orchestrator:
```py
import json
from pydantic import BaseModel
class PricingReview(BaseModel):
risk: str
summary: str
class RolloutReview(BaseModel):
risk: str
blockers: list[str]
# By default, Pydantic output_type results are stringified (repr) when passed back
# as tool output. This extractor ensures the orchestrator receives clean JSON instead.
async def structured_output_extractor(result) -> str:
final_output = result.final_output
if isinstance(final_output, BaseModel):
return json.dumps(final_output.model_dump(mode="json"), sort_keys=True)
return str(final_output)
# Each reviewer gets its own isolated workspace
pricing_agent = SandboxAgent(
name="Pricing Reviewer",
default_manifest=pricing_docs_manifest,
capabilities=[Shell()],
output_type=PricingReview,
...
)
rollout_agent = SandboxAgent(
name="Rollout Reviewer",
default_manifest=rollout_docs_manifest,
capabilities=[Shell()],
output_type=RolloutReview,
...
)
# Orchestrator calls them like tools, each in its own sandbox
client = DaytonaSandboxClient()
orchestrator = Agent(
name="Deal Desk Coordinator",
instructions="Use both review tools, then synthesize a recommendation.",
tools=[
pricing_agent.as_tool(
tool_name="review_pricing",
tool_description="Review the pricing packet.",
custom_output_extractor=structured_output_extractor,
run_config=RunConfig(sandbox=SandboxRunConfig(client=client, options=DaytonaSandboxClientOptions())),
),
rollout_agent.as_tool(
tool_name="review_rollout",
tool_description="Review the rollout plan.",
custom_output_extractor=structured_output_extractor,
run_config=RunConfig(sandbox=SandboxRunConfig(client=client, options=DaytonaSandboxClientOptions())),
),
],
)
result = await Runner.run(orchestrator, "Review the Acme Corp renewal deal.")
print(result.final_output)
await client.close()
```
Each sandbox agent runs in its own isolated environment. The orchestrator never sees the files; it only gets the structured output as JSON via the `custom_output_extractor`. This is great for **fan-out** patterns where you need multiple independent analyses.
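Outside the SDK, the underlying fan-out shape is just concurrent tasks whose structured results are serialized for the caller. A hypothetical stand-in (the reviewer coroutines here replace full sandboxed agent runs, and in the real example the orchestrator model decides when to invoke each tool):

```py
import asyncio
import json

async def review_pricing(deal: str) -> dict:
    await asyncio.sleep(0)  # stands in for a full sandboxed agent run
    return {"risk": "low", "summary": f"Pricing for {deal} is within policy."}

async def review_rollout(deal: str) -> dict:
    await asyncio.sleep(0)
    return {"risk": "medium", "blockers": ["SSO migration not scheduled"]}

async def main() -> list[str]:
    # Run both independent reviews concurrently.
    results = await asyncio.gather(
        review_pricing("Acme Corp"), review_rollout("Acme Corp")
    )
    # Like custom_output_extractor: hand the caller clean JSON strings.
    return [json.dumps(r, sort_keys=True) for r in results]

outputs = asyncio.run(main())
print(outputs[0])
```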
## 6\. Memory Across Sessions
The `Memory` capability lets an agent learn from previous runs. It extracts durable facts and preferences from each conversation, consolidates them into structured files in the workspace, and automatically injects a summary into the agent's instructions on future runs.
```py
from agents.sandbox import LocalSnapshotSpec, SandboxRunConfig
from agents.sandbox.capabilities import ApplyPatch, Memory, Shell
agent = SandboxAgent(
name="Data Analyst",
model="gpt-5.4",
instructions="Analyze the workspace and answer questions.",
default_manifest=manifest,
capabilities=[
Shell(),
ApplyPatch(),
Memory(),
],
)
snapshot = LocalSnapshotSpec(base_path=Path("/tmp/my-agent-snapshots"))
# First run: agent learns user preferences.
# Memory artifacts are written to the workspace when the session closes.
session = await client.create(manifest=manifest, snapshot=snapshot)
async with session:
run_config = RunConfig(sandbox=SandboxRunConfig(session=session))
result1 = await Runner.run(agent, "Fix the bug. I prefer minimal patches.", run_config=run_config)
# Second run: resume the workspace so the agent sees the memory files from run 1.
resumed = await client.resume(session.state)
async with resumed:
run_config = RunConfig(sandbox=SandboxRunConfig(session=resumed))
result2 = await Runner.run(agent, "Add a test for the fix.", run_config=run_config)
```
Memory consolidation runs as a background task and flushes when the session closes, so the close/resume cycle ensures run 2 sees the artifacts from run 1\. You can also keep a single sandbox session open across runs (like section 2), though memory visibility then depends on whether the background task has finished.
`Memory()` with no arguments enables both reading and writing with live updates (the agent can repair stale memory in place). It requires `Shell` and `ApplyPatch` as sibling capabilities. You can tune the behavior:
```py
from agents.sandbox.config import MemoryReadConfig, MemoryWriteConfig
# Write-only (no auto-injection of memory into instructions):
Memory(read=None)
# Read-only (no background memory generation):
Memory(write=None)
# Custom write settings:
Memory(write=MemoryWriteConfig(
batch_size=2,
extra_prompt="Pay attention to which SQL patterns work best for this dataset.",
))
# Disable live updates (agent reads memory but won't repair stale entries):
Memory(read=MemoryReadConfig(live_update=False))
```
**How it works under the hood:**
After each `Runner.run()` completes, the SDK serializes the run (user input, tool calls, outputs, and final response, filtering out system/developer items and reasoning) into a JSONL file in `rollouts/`. A background pipeline then processes these in two phases:
1. **Phase 1 (per-rollout extraction):** A lightweight model (`gpt-5.4-mini`) reads each rollout transcript and extracts durable facts and preferences into `memory/raw_memories/` and `memory/rollout_summaries/`.
2. **Phase 2 (consolidation):** Once enough phase-1 results accumulate (controlled by `batch_size`), a stronger model (`gpt-5.4`) consolidates everything into `memory/MEMORY.md` (a structured, grep-friendly handbook) and `memory/memory_summary.md` (a compact index). A final phase-2 pass always runs on session shutdown.
Both phases run in a background `asyncio.Task`, so they don't block the agent's main work.
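The rollout-and-batch bookkeeping can be sketched outside the SDK. This is a hypothetical stand-in with no model calls (phase 1 just copies user messages, phase 2 concatenates files) that mirrors the file layout and the `batch_size` trigger described above:

```py
import json
import tempfile
from pathlib import Path

class MemoryPipeline:
    """Toy sketch of the two-phase pipeline; not the SDK's implementation."""

    def __init__(self, root: Path, batch_size: int = 2):
        self.root = root
        self.batch_size = batch_size
        (root / "rollouts").mkdir(parents=True, exist_ok=True)
        (root / "memory" / "raw_memories").mkdir(parents=True, exist_ok=True)
        self.pending = 0  # phase-1 results awaiting consolidation

    def record_rollout(self, run_id: str, items: list[dict]) -> None:
        # Serialize the run transcript as JSONL, then extract from it.
        path = self.root / "rollouts" / f"{run_id}.jsonl"
        path.write_text("\n".join(json.dumps(i) for i in items))
        self.phase1(run_id, items)

    def phase1(self, run_id: str, items: list[dict]) -> None:
        # Stand-in for the lightweight-model extraction step.
        facts = [i["content"] for i in items if i.get("role") == "user"]
        dest = self.root / "memory" / "raw_memories" / f"{run_id}.md"
        dest.write_text("\n".join(facts))
        self.pending += 1
        if self.pending >= self.batch_size:
            self.phase2()

    def phase2(self) -> None:
        # Stand-in for consolidation: merge raw memories into MEMORY.md.
        raw = sorted((self.root / "memory" / "raw_memories").glob("*.md"))
        (self.root / "memory" / "MEMORY.md").write_text(
            "\n".join(p.read_text() for p in raw)
        )
        self.pending = 0

pipeline = MemoryPipeline(Path(tempfile.mkdtemp()), batch_size=2)
pipeline.record_rollout("run1", [{"role": "user", "content": "I prefer minimal patches."}])
pipeline.record_rollout("run2", [{"role": "user", "content": "Use pytest for tests."}])
print((pipeline.root / "memory" / "MEMORY.md").read_text())
```

The real pipeline runs these steps in background `asyncio` tasks and uses models for extraction and consolidation; only the triggering logic is mirrored here.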
On subsequent runs, the `Memory` capability reads `memory/memory_summary.md` from the workspace and injects it into the agent's instructions (truncated to 15k tokens). The agent also gets guidance on when to grep `memory/MEMORY.md` for deeper context. This injection happens automatically — you don't need to wire it up yourself.
The full set of generated artifacts:
- `rollouts/`: JSONL rollout files (raw transcripts of each run)
- `memory/MEMORY.md`: detailed, grep-friendly handbook
- `memory/memory_summary.md`: compact summary, auto-injected into instructions
- `memory/raw_memories/`: individual learned facts (one file per rollout)
- `memory/raw_memories.md`: concatenated version of the above, fed into phase 2
- `memory/rollout_summaries/`: per-rollout summaries
- `memory/skills/`: optional reusable procedures the consolidation model may create
If you combine this with pause/resume (\#3), the memory files survive across sessions. The workspace persistence model includes all runtime-created files by default (only `ephemeral=True` manifest entries are excluded). So on the next run, the agent starts with full context from previous sessions — no extra wiring needed.
## 7\. Custom Capabilities
Capabilities are plugins that inject tools and instructions into a sandbox agent. The built-in ones (`Shell`, `ApplyPatch`, `Vision`) cover common cases, but you can write your own:
```py
from agents.sandbox.capabilities.capability import Capability
from agents.tool import Tool, function_tool
class ExposePort(Capability):
type: str = "expose_port"
def tools(self) -> list[Tool]:
session = self.session # bound automatically by the framework
@function_tool
async def get_app_url(port: int) -> str:
"""Get the public URL for a port running in this sandbox."""
endpoint = await session.resolve_exposed_port(port)
return endpoint.url_for("http")
return [get_app_url]
```
**Note:** `resolve_exposed_port` requires the port to be predeclared in the client options, e.g. `DaytonaSandboxClientOptions(exposed_ports=(8080,))`. Without this, the call raises `ExposedPortUnavailableError`.
Use this to expose domain-specific operations (database queries, API testing, cloud storage access) as tools the agent can call.
## Quick Reference: DaytonaSandboxClientOptions
| Option | Default | Description |
| :---- | :---- | :---- |
| `image` | `None` | OCI-compliant image to boot from |
| `env_vars` | `None` | Environment variables injected at creation |
| `exposed_ports` | `()` | Ports accessible via signed preview URLs |
| `pause_on_exit` | `False` | Pause sandbox instead of deleting on cleanup |
| `auto_stop_interval` | `0` | Seconds of inactivity before auto-pause (0 \= disabled) |
| `create_timeout` | `60` | Timeout in seconds for sandbox creation |
| `resources` | `None` | CPU/memory/disk configuration |
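Putting several of these together might look as follows. The field names come from the table above; the exact accepted value types are assumptions:

```py
from agents.extensions.sandbox import DaytonaSandboxClient, DaytonaSandboxClientOptions

options = DaytonaSandboxClientOptions(
    env_vars={"APP_ENV": "staging"},  # injected at creation
    exposed_ports=(8080,),            # signed preview URLs for these ports
    pause_on_exit=True,               # pause instead of delete on cleanup
    auto_stop_interval=900,           # auto-pause after 15 min of inactivity
    create_timeout=120,               # allow slower cold starts
)
client = DaytonaSandboxClient()
```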
## Patterns at a Glance
| Pattern | When to Use | Key Concept |
| :---- | :---- | :---- |
| **Give Your Agent a Shell** (\#1) | Agent needs to read, write, or run code | `Manifest` \+ `Shell` |
| **Multi-Turn Conversations** (\#2) | Interactive sessions with a human | `result.to_input_list()` |
| **Pause/Resume** (\#3) | Long-running or iterative tasks | `pause_on_exit` \+ `client.resume(state)` |
| **Handoffs** (\#4) | Pipeline: analyze → write → review | `handoffs=[next_agent]` |
| **Agents as Tools** (\#5) | Parallel independent analyses | `agent.as_tool(run_config=...)` |
| **Memory** (\#6) | Preferences that persist across sessions | `Memory()` \+ `MemoryReadConfig`/`MemoryWriteConfig` |
| **Custom Capabilities** (\#7) | Domain-specific sandbox operations | Subclass `Capability` |
## What's Next
For a complete project that puts these patterns to work, see [**Building a Text-to-SQL Agent with OpenAI Agents SDK and Daytona**](https://www.daytona.io/docs/en/guides/openai-agents/text-to-sql-agent-openai-agents-sdk.md), a conversational agent that queries real federal spending data, combining multi-turn conversations, pause/resume, memory, and preview URLs.
In this guide we use the [OpenAI Agents SDK](https://developers.openai.com/api/docs/guides/agents) and Daytona sandboxes to build a conversational agent that answers natural-language questions about NASA's federal spending. It translates questions into SQL, runs them against a SQLite database of real USAspending.gov data (FY2021-FY2025), and explains the results. The agent runs inside a Daytona sandbox, which provides isolated code execution, pause/resume across runs (so you can reconnect to the same sandbox without re-downloading data), cross-session memory (so the agent learns from previous conversations), and signed preview URLs for downloading query results.
---
### What a session looks like
```
> How much did NASA spend in FY2024?
[SQL] (limit 10)
SELECT SUM(federal_action_obligation) AS total_obligations
FROM spending
WHERE fiscal_year = 2024;
→ Result (1 rows)
| total_obligations |
|-------------------|
| 21352116106.41 |
↓ https://8080-your-sandbox-id.proxy.daytona.works/query_1775081861_281.csv
NASA obligated **$21.35 billion** in **FY2024**.
> Who are the top 5 recipients?
[SQL] (limit 10)
SELECT COALESCE(NULLIF(recipient_parent_name, ''), NULLIF(recipient_name, ''), 'UNKNOWN') AS recipient_entity,
SUM(federal_action_obligation) AS total_obligations
FROM spending
WHERE fiscal_year = 2024
GROUP BY recipient_entity
ORDER BY total_obligations DESC
LIMIT 5;
→ Result (5 rows)
| recipient_entity | total_obligations |
|--------------------------------------|-------------------|
| CALIFORNIA INSTITUTE OF TECHNOLOGY | 2247310017.56 |
| SPACE EXPLORATION TECHNOLOGIES CORP. | 1996992060.56 |
| THE BOEING COMPANY | 1540996029.73 |
| LOCKHEED MARTIN CORP | 1208072461.33 |
| NORTHROP GRUMMAN CORPORATION | 641375071.34 |
↓ https://8080-your-sandbox-id.proxy.daytona.works/query_1775081919_307.csv
In FY2024, the top 5 recipients were **California Institute of Technology ($2.25B)**,
**SpaceX ($2.00B)**, **Boeing ($1.54B)**, **Lockheed Martin ($1.21B)**,
and **Northrop Grumman ($641.38M)**.
```
The agent keeps conversation context across turns, so you can naturally follow up on previous questions. Each result also gets a download link via the Daytona sandbox's exposed port.
## 1\. Setup
Set your environment variables:
```shell
export OPENAI_API_KEY=...
export DAYTONA_API_KEY=... # from https://app.daytona.io/dashboard/keys
```
Clone the repo, install the dependencies, and run:
```shell
git clone https://github.com/openai/openai-agents-python
cd openai-agents-python
uv sync --extra daytona
uv run python -m examples.sandbox.extensions.daytona.usaspending_text2sql.agent
```
On the first run, the agent fetches NASA spending data from the USAspending.gov API and builds a SQLite database inside the sandbox. This takes a few minutes. Subsequent runs reuse the paused sandbox and skip this step entirely.
## 2\. The Workspace
The manifest declares what goes into the sandbox:
```py
from agents.sandbox import Manifest
from agents.sandbox.entries import Dir, LocalDir, LocalFile
manifest = Manifest(
root=WORKSPACE_ROOT,
entries={
"setup_db.py": LocalFile(src=SETUP_DB_PATH),
"schema": LocalDir(src=SCHEMA_DIR),
"data": Dir(ephemeral=True),
},
)
```
The main entries:
- **`setup_db.py`** fetches NASA spending data from the USAspending.gov bulk download API and builds a SQLite database. It downloads contracts, grants, and IDVs for FY2021-FY2025, parses the CSVs, and creates an indexed `spending` table with \~30 columns covering amounts, recipients, locations, industry codes, and more. It's idempotent: if the DB already exists, it skips the download.
- **`schema/`** contains documentation the agent uses to understand the data:
- `overview.md` has the table schema, column descriptions, and example SQL patterns. This gets injected directly into the agent's `developer_instructions`.
- `tables/spending.md` has detailed per-column docs the agent can read via shell if needed.
- `glossary.md` has official USAspending terminology (what "obligation" vs "outlay" means, etc.), fetched from the USAspending API during setup.
- **`data/`** is where the SQLite DB lands. It's marked `ephemeral=True` so it's excluded from workspace snapshots (the DB can always be rebuilt from the API).
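The idempotency check in `setup_db.py` boils down to a pattern you can sketch with `sqlite3` alone. The columns here are an illustrative subset, not the real ~30-column schema:

```py
import sqlite3
import tempfile
from pathlib import Path

def build_db(db_path: Path) -> bool:
    """Idempotent build: return False (skipped) if the DB already exists."""
    if db_path.exists():
        return False  # nothing to do; the expensive download is skipped
    db_path.parent.mkdir(parents=True, exist_ok=True)
    con = sqlite3.connect(db_path)
    con.execute(
        """CREATE TABLE spending (
               fiscal_year INTEGER,
               recipient_name TEXT,
               federal_action_obligation REAL
           )"""
    )
    # Index the column most queries filter on.
    con.execute("CREATE INDEX idx_spending_fy ON spending (fiscal_year)")
    con.commit()
    con.close()
    return True

tmp = Path(tempfile.mkdtemp()) / "usaspending.db"
print(build_db(tmp))  # True: first run builds the database
print(build_db(tmp))  # False: DB exists, build skipped
```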
## 3\. The SqlCapability
The core of this example is a custom `Capability` that gives the agent a guardrailed `run_sql` tool. The agent could run `sqlite3` directly via the shell, but that would give it unrestricted access. The capability enforces safety at multiple levels:
```py
class SqlCapability(Capability):
type: Literal["sql"] = "sql"
db_path: str = "data/usaspending.db"
max_display_rows: int = 100
max_csv_rows: int = 10_000
timeout_seconds: float = 30.0
results_dir: str = "results"
```
**Guardrails (defense in depth):**
- **Read-only access**: SQLite opened with `?mode=ro` URI \+ `PRAGMA query_only = ON`
- **Statement validation**: Only `SELECT`, `WITH`, `EXPLAIN`, `PRAGMA` are allowed
- **Row limits**: 100 rows displayed to the model, up to 10,000 saved as downloadable CSV
- **Timeouts**: Queries killed after 30 seconds
The tool returns structured JSON (columns, rows, row counts, CSV filename), and the capability injects instructions telling the agent to prefer aggregations, use the schema docs, and explain its query logic.
## 4\. The Agent
The pieces come together in a `SandboxAgent` with four capabilities:
```py
from agents.sandbox import SandboxAgent
from agents.sandbox.capabilities.compaction import Compaction
from agents.sandbox.capabilities.memory import Memory
from agents.sandbox.capabilities.shell import Shell
from agents.sandbox.config import MemoryReadConfig, MemoryWriteConfig
agent = SandboxAgent(
name="NASA Spending Q&A",
default_manifest=manifest,
model="gpt-5.4",
instructions=(
"You are a helpful data analyst that answers questions about NASA federal spending "
"by writing and executing SQL queries."
),
developer_instructions=DEVELOPER_INSTRUCTIONS,
capabilities=[
SqlCapability(db_path="data/usaspending.db"),
Shell(),
Compaction(),
Memory(
read=MemoryReadConfig(live_update=False),
write=MemoryWriteConfig(batch_size=5),
),
],
)
```
- **`SqlCapability`** provides the guardrailed `run_sql` tool (section 3).
- **`Shell()`** gives the agent general shell access, mainly so it can `cat schema/tables/spending.md` or `cat schema/glossary.md` when it needs column details or term definitions beyond what's in the overview.
- **`Compaction()`** handles long sessions. After the conversation grows past a token threshold, the SDK automatically compacts earlier turns to keep context manageable. Without this, a long Q\&A session would eventually hit the model's context limit.
- **`Memory()`** gives the agent cross-session recall. The `Memory` capability is covered in detail in section 8\. The key config choices here: `live_update=False` means the agent isn't instructed to edit memory files mid-conversation (and doesn't need `ApplyPatch`), and `batch_size=5` batches the expensive consolidation step to run every 5 turns instead of every turn.
The `developer_instructions` load the full schema overview from `schema/overview.md` plus guidelines about data caveats (obligations vs outlays, masked recipients, etc.), so the agent starts every conversation already knowing the schema and quirks of the data.
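The effect of compaction can be approximated with a toy sketch. This assumes a crude 4-characters-per-token estimate and a placeholder summary where the SDK would use a real model-generated one:

```py
def estimate_tokens(messages: list[dict]) -> int:
    # Crude heuristic: roughly 4 characters per token.
    return sum(len(m["content"]) for m in messages) // 4

def compact(messages: list[dict], max_tokens: int, keep_last: int = 4) -> list[dict]:
    """Collapse older turns into one summary stub once over the threshold."""
    if estimate_tokens(messages) <= max_tokens:
        return messages
    head, tail = messages[:-keep_last], messages[-keep_last:]
    summary = {
        "role": "system",
        "content": f"[compacted summary of {len(head)} earlier messages]",
    }
    return [summary] + tail

# Ten long turns blow past a 500-token budget; compaction keeps the tail.
history = [{"role": "user", "content": "x" * 400} for _ in range(10)]
compacted = compact(history, max_tokens=500)
print(len(compacted))  # 5: one summary stub + the last 4 turns
```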
## 5\. The Conversation Loop
Each turn appends the user's question to the conversation history, streams the agent's response (printing text deltas, tool calls, and formatted results), then carries the full history forward via `result.to_input_list()`. This is what lets follow-ups like "break that down by year" work, because the history includes tool calls and their results, so the agent knows what "that" refers to.
```py
input_items = conversation + [{"role": "user", "content": question}]
result = Runner.run_streamed(agent, input_items, run_config=run_config)
async for event in result.stream_events():
# print text deltas, tool calls, and tool outputs as they arrive
...
conversation = result.to_input_list()
```
The actual example includes SQL syntax highlighting and table formatting for the terminal output. See `agent.py` for the full streaming loop.
## 6\. Pause and Resume
The first run is slow because `setup_db.py` needs to download data from the USAspending API. With `pause_on_exit=True`, the sandbox pauses instead of being deleted, so subsequent runs reconnect to the existing sandbox with the database already built.
```py
# Core pattern (see agent.py for full error handling):
client = DaytonaSandboxClient()
options = DaytonaSandboxClientOptions(pause_on_exit=True, exposed_ports=(8080,))
saved_state = _load_session_state() # returns None on first run
if saved_state is not None:
session = await client.resume(saved_state)
else:
session = await client.create(manifest=agent.default_manifest, options=options)
_save_session_state(session.state)
```
The session state is serialized to a JSON file. On the next run, `client.resume()` uses the saved `sandbox_id` to find and wake the paused sandbox. If it's expired or been deleted, it falls through and creates a fresh one. Type `exit` to pause the sandbox, or `destroy` to delete it.
## 7\. Downloadable Results via Exposed Ports
Each query result is saved as a CSV inside the sandbox. To make these downloadable, a simple HTTP file server runs on port 8080:
```py
# Start a file server inside the sandbox
await session.exec("mkdir -p results", timeout=5.0)
await session.exec(
"nohup python3 -m http.server 8080 --directory results > /dev/null 2>&1 &",
timeout=5.0,
)
# Get the signed public URL
endpoint = await session.resolve_exposed_port(8080)
downloads_url = endpoint.url_for("http")
# -> https://8080-abc123.proxy.daytona.works
```
The `run_sql` tool includes the CSV filename in its response, and the agent surfaces the download link. You can see this in the session example at the top: each result has a `↓` download URL (signed Daytona preview link).
## 8\. Memory Across Sessions
Within a single REPL session, the agent has the full conversation history and remembers every question and answer. But when you type `exit` and come back later, that history is gone. The `Memory` capability bridges this gap by extracting durable learnings from each session and making them available to future sessions.
**How it works:**
After each conversational turn, the SDK serializes the full exchange (input, tool calls, outputs, response) into a rollout file. A background pipeline then processes these:
1. **Phase 1** (after every turn): A lightweight model (`gpt-5.4-mini`) reads the rollout and extracts durable facts (useful query patterns, data caveats, column quirks, user preferences) into `memory/raw_memories/` and `memory/rollout_summaries/`.
2. **Phase 2** (every 5 turns, or on shutdown): A stronger model (`gpt-5.4`) consolidates everything into `memory/MEMORY.md` (a detailed, grep-friendly handbook) and `memory/memory_summary.md` (a compact summary).
Both phases run in background `asyncio` tasks, so they don't block the conversation.
**How it persists:**
Memory files live in the sandbox filesystem and survive as long as the paused sandbox does. When you reconnect, the `Memory` capability reads them back. If the sandbox is destroyed or can't be reconnected, memory is lost and the agent starts fresh.
## Quick Reference
| Component | What it does |
| :---- | :---- |
| `setup_db.py` | Fetches NASA data from USAspending API, builds SQLite DB |
| `schema/overview.md` | Table schema and example queries, injected into agent instructions |
| `schema/glossary.md` | Official USAspending term definitions |
| `SqlCapability` | Guardrailed `run_sql` tool (read-only, row limits, timeouts) |
| `Shell()` | General shell access for reading schema docs and memory |
| `Compaction()` | Automatic context compression for long sessions |
| `Memory()` | Cross-session learning: extracts and recalls durable facts |
| `pause_on_exit=True` | Sandbox persists across runs (avoids re-downloading data) |
Guides for running OpenClaw with Daytona.
This guide walks you through setting up [OpenClaw](https://openclaw.ai/) inside a Daytona sandbox and configuring Telegram and WhatsApp channels.
Running OpenClaw in a Daytona sandbox keeps your AI assistant isolated from your local machine, provides a secure environment for code execution, and ensures your bot stays online 24/7 without tying up your personal computer.
### Prerequisites
- Daytona account and API key (Get it from [Daytona Dashboard](https://app.daytona.io/dashboard/keys))
- Local terminal (macOS, Linux, or Windows)
### Install the Daytona CLI
For macOS and Linux (via Homebrew):

```bash
brew install daytonaio/cli/daytona
```

For Windows (PowerShell):

```bash
powershell -Command "irm https://get.daytona.io/windows | iex"
```
:::note
Already have the CLI? Check your version with `daytona --version`. If it's below **0.135.0**, [upgrade to the latest version](https://www.daytona.io/docs/en/getting-started.md#cli).
:::
### Authenticate with Daytona
Log in to your Daytona account using your API key:
```bash
daytona login --api-key=YOUR_API_KEY
```
Replace `YOUR_API_KEY` with your actual Daytona API key.
### Create a Sandbox
Create a sandbox for running OpenClaw:
```bash
daytona sandbox create --name openclaw --snapshot daytona-medium --auto-stop 0
```
OpenClaw comes preinstalled in the default Daytona snapshot, so the command above is all you need.
:::note
The `--auto-stop 0` flag disables automatic shutdown, keeping OpenClaw accessible until you manually stop or delete the sandbox. The `daytona-medium` snapshot is required because the OpenClaw gateway needs a minimum of 2GB memory.
:::
### Connect to the Sandbox
SSH into your sandbox:
```bash
daytona ssh openclaw
```
### Run OpenClaw Onboarding
Start the onboarding process:
```bash
openclaw onboard
```
:::note
The model provider steps below are for Anthropic. If using a different provider, follow the prompts for your chosen option.
:::
Follow the prompts:
1. **Security acknowledgment:** Accept to continue
2. **Onboarding mode:** Select **Quickstart**
3. **Model/auth provider:** Select **Anthropic**
4. **Anthropic auth method:** Select **Anthropic API key**
5. **Enter Anthropic API key:** Paste your API key
6. **Default model:** Keep current (default: `anthropic/claude-opus-4-5`)
7. **Select channel:** Choose **Skip for now** (we'll configure channels later)
8. **Configure skills:** Select **No** (configure later based on your needs)
9. **Enable hooks:** Select **Skip for now** (configure later based on your needs)
10. **Gateway service:** Select **Skip** (already installed)
When onboarding finishes, the output will display a **Dashboard ready** section with a dashboard link. Your gateway token is the value after `?token=` in the URL. Save this token - you'll need it to connect to the dashboard.
OpenClaw will also ask **Install shell completion script?** - choose whichever you prefer; this step is optional and doesn't affect functionality.
### Start the Gateway
Run the gateway in the background:
```bash
nohup openclaw gateway run > /tmp/gateway.log 2>&1 &
```
The `&` runs the gateway as a background process, keeping your terminal free for other commands. The `nohup` ensures the gateway keeps running even after you close the SSH connection.
### Access the Dashboard
The OpenClaw dashboard is a web interface for managing your assistant, monitoring connections, and configuring channels. To access it, you need a [preview URL](https://www.daytona.io/docs/en/preview.md) that exposes the gateway port running inside your sandbox.
In your local terminal (not inside the sandbox SSH session), generate the preview URL:
```bash
daytona preview-url openclaw --port 18789
```
This command generates a [signed preview URL](https://www.daytona.io/docs/en/preview.md#signed-preview-url) that securely exposes the port.
Open the URL in your browser, go to the **Overview** section, paste your gateway token in the **Gateway Token** field, and click **Connect**.
:::tip
The preview URL expires after 1 hour by default (customizable with the `--expires` flag). When it expires, run the same command again to generate a new one.
:::
### Pair Your Browser
OpenClaw uses device pairing as a security measure - only approved devices can connect to and control your assistant. When you first attempt to connect from the dashboard, your browser registers as a new device that needs approval.
List pending device requests:
```bash
openclaw devices list
```
Approve your device:
```bash
openclaw devices approve REQUEST_ID
```
Replace `REQUEST_ID` with the value from the **Request** column.
Click **Connect** again in the dashboard.
Once connected, you should see a green status indicator - your OpenClaw is now ready to use.
### Security
Running OpenClaw this way provides three layers of security:
1. **Preview URL:** Time-limited access to the dashboard port
2. **Gateway token:** Required to authenticate with the dashboard
3. **Device approval:** Only approved devices can connect and control your assistant
Even if someone obtains your dashboard URL, they cannot connect without the gateway token and an approved device.
:::caution
Keep your gateway token and preview URL secret. Do not share them publicly.
:::
---
### Configure Telegram
Set up a Telegram bot to chat with OpenClaw.
#### Create a Telegram Bot
1. Open Telegram and search for **@BotFather**
2. Send `/start`, then `/newbot`
3. Enter a name for your bot
4. Enter a username for your bot
5. Copy the bot token provided
#### Configure OpenClaw
Enable Telegram and set your bot token:
```bash
openclaw config set channels.telegram.enabled true
openclaw config set channels.telegram.botToken YOUR_BOT_TOKEN
```
Verify the configuration:
```bash
openclaw config get channels.telegram
```
#### Restart the Gateway
```bash
openclaw gateway stop
nohup openclaw gateway run > /tmp/gateway.log 2>&1 &
```
#### Complete Verification
1. Open your bot's chat in Telegram and click **Start**
2. A pairing code will appear. Approve the pairing request:
```bash
openclaw pairing approve telegram PAIRING_CODE
```
You can now message your OpenClaw through Telegram.
---
### Configure WhatsApp
Set up WhatsApp to chat with OpenClaw.
#### Run Configuration
```bash
openclaw config --section channels
```
When prompted:
1. Select **Local (this machine)** for gateway location
2. Choose **Configure/link**
3. Select **WhatsApp (QR link)**
4. Select **Yes** for "Link WhatsApp now (QR)?"
#### Scan the QR Code
Open WhatsApp on your phone, go to **Settings → Linked Devices → Link a Device**, and scan the QR code displayed in your terminal.
Once paired, you'll see:
```
✅ Linked after restart; web session ready.
```
#### Set Up Your Phone Number
Select **This is my personal phone number** (or choose the other option if you have a separate phone for OpenClaw) and enter your phone number when prompted.
#### Finish Configuration
When prompted to select another channel, choose **Finished**. You'll see:
```
└ Configure complete.
```
#### Start Chatting
Send a message to yourself in WhatsApp - OpenClaw will respond. You can give it instructions and information on how to behave directly in the chat.
:::tip
To allow other users to chat with OpenClaw, add their phone numbers to the **Allow From** list in **Channels → WhatsApp** inside the dashboard. When they send a message, OpenClaw will respond.
:::
This guide shows how to run [OpenClaw](https://openclaw.ai/) inside a Daytona sandbox using the Daytona SDK. The script automatically creates and configures a sandbox with OpenClaw and provides an authenticated [preview URL](https://www.daytona.io/docs/en/preview.md) for using OpenClaw in the browser.
---
### 1. Workflow Overview
When you run the script, it creates a Daytona sandbox, starts the OpenClaw gateway inside it, and prints a preview link for the dashboard:
```
$ npm start
Creating Daytona sandbox...
Configuring OpenClaw...
Starting OpenClaw...
(Ctrl+C to shut down and delete the sandbox)
🔗 Secret link to Control UI: https://18789-xxxx.proxy.daytona.works?token=...
```
Open the provided link in your browser to connect to the OpenClaw Control UI. This link contains a configuration token, and anyone can use it to connect to OpenClaw without device approval.
You can use the Control UI to chat with your assistant, configure Telegram and WhatsApp, and manage sessions. When you exit the script (Ctrl+C), the sandbox will not be deleted unless [sandbox persistence is disabled](https://www.daytona.io/docs/en/guides/openclaw/openclaw-sdk-sandbox.md#4-key-constants).
### 2. Project Setup
#### Clone the Repository
Clone the Daytona [repository](https://github.com/daytonaio/daytona) and go to the example directory:
```bash
git clone https://github.com/daytonaio/daytona.git
cd daytona/guides/typescript/openclaw
```
#### Configure Environment
Get your API key from the [Daytona Dashboard](https://app.daytona.io/dashboard/keys).
Copy `.env.example` to `.env` and add your Daytona API key:
```bash
DAYTONA_API_KEY=your_daytona_key
```
A default OpenClaw configuration is stored in `openclaw.json`. You can customize it according to the [configuration reference](https://docs.openclaw.ai/gateway/configuration-reference). You can also add additional environment variables to `.env.sandbox` (e.g. `ANTHROPIC_API_KEY` for Claude) and they will be loaded into the sandbox.
#### Run the Example
:::note[Node.js]
Node.js 18 or newer is required.
:::
Install dependencies and run:
```bash
npm install
npm start
```
The script creates the sandbox, starts the OpenClaw gateway, and prints a secret link with the token in the URL.
### 3. How It Works
1. The script creates a [Daytona sandbox](https://www.daytona.io/docs/en/sandboxes.md) with `DAYTONA_SNAPSHOT` (e.g. `daytona-medium`) and loads env vars from `.env.sandbox`.
2. Your local `openclaw.json` is merged with built-in config and written to `~/.openclaw/openclaw.json` in the sandbox.
3. The OpenClaw gateway is started inside the sandbox on `OPENCLAW_PORT` via [process execution](https://www.daytona.io/docs/en/process-code-execution.md).
4. A signed [preview link](https://www.daytona.io/docs/en/preview.md) is generated and the token is appended as `?token=...`; this link is printed so you can open the Control UI.
5. On Ctrl+C, the sandbox is deleted unless `PERSIST_SANDBOX` is `true`.
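Step 4 above can be sketched as a small helper (the function name and the URL and token values here are illustrative, not taken from the example's source):

```typescript
// Build the "secret link" by appending the gateway token to the signed
// preview URL as a query parameter. URL and token values are made up.
function controlUiLink(previewUrl: string, token: string): string {
  const url = new URL(previewUrl)
  url.searchParams.set('token', token)
  return url.toString()
}

const link = controlUiLink('https://18789-xxxx.proxy.daytona.works', 'abc123')
console.log(link) // https://18789-xxxx.proxy.daytona.works/?token=abc123
```

Embedding the token in the URL is also why anyone who obtains the link can connect without device approval.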
### 4. Key Constants
You can change behavior by editing the constants in [`src/index.ts`](https://github.com/daytonaio/daytona/blob/main/guides/typescript/openclaw/src/index.ts):
| Constant | Default | Description |
|----------|---------|-------------|
| `PERSIST_SANDBOX` | true | When true, the sandbox is not deleted when the script exits |
| `MAKE_PUBLIC` | true | Allow anyone to access the sandbox instead of limiting to your Daytona organization |
| `OPENCLAW_PORT` | 18789 | OpenClaw Gateway and Control UI port |
| `SHOW_LOGS` | true | Stream OpenClaw stdout/stderr to the terminal |
**Key advantages:**
- Secure, isolated execution in a Daytona sandbox
- No device approval — token in URL and `allowInsecureAuth` skip pairing
- Control UI and channels accessible via the secret preview link
- Optional: keep the sandbox running after exit (`PERSIST_SANDBOX`)
Guides for running [OpenCode](https://opencode.ai/) with Daytona.
This guide demonstrates how to run the [Daytona OpenCode plugin](https://www.npmjs.com/package/@daytona/opencode) which integrates Daytona sandboxes and OpenCode. When the plugin is active, all agent operations occur in secure sandboxes, with one sandbox per OpenCode session. The plugin also has the ability to sync changes between sandboxes and local Git branches.
### 1. Workflow Overview
When you run OpenCode with the Daytona plugin, sandboxes are created automatically inside OpenCode sessions. Operations such as running code, installing dependencies, and starting servers occur in the sandbox.
Sandboxes are preserved until you delete the OpenCode session. If a local Git repository is detected, the plugin syncs changes between the sandbox and branches with the `opencode/` prefix.
### 2. Project Setup
#### Add the Plugin
Add the Daytona plugin to your project by creating or editing `opencode.json` in the project directory:
```json
{
"$schema": "https://opencode.ai/config.json",
"plugin": ["@daytona/opencode"]
}
```
OpenCode downloads the plugin automatically when it starts. To install the plugin globally instead, edit `~/.config/opencode/opencode.json` and add the same `plugin` entry.
#### Configure Environment
This plugin requires a [Daytona account](https://www.daytona.io/) and [Daytona API key](https://app.daytona.io/dashboard/keys) to create sandboxes.
Set your API key:
```bash
export DAYTONA_API_KEY="your-api-key"
```
Or create a `.env` file in your project root:
```env
DAYTONA_API_KEY=your-api-key
```
#### Run OpenCode
:::note[Git required]
Ensure your project is a Git repository to enable syncing changes between the sandbox and your machine.
:::
Initialize Git if needed, then start OpenCode:
```bash
git init
opencode
```
You can now use OpenCode as usual. As you work, you will see notifications in OpenCode indicating sandboxes are being created and changes are being synced to local branches.
To confirm the plugin is working, type `pwd` in the chat and you should see a path like `/home/daytona/project`.
To view live logs from the plugin for debugging:
```bash
tail -f ~/.local/share/opencode/log/daytona.log
```
#### Version control
In your project directory, use Git to list and check out the branches OpenCode creates:
```bash
git branch
git checkout opencode/1
```
By default, new sessions start from the branch that was checked out when OpenCode was started. After that, synchronization only goes one way: from the sandbox to your local branch. To start working from a different branch, use Git to check it out and start OpenCode again:
```bash
git checkout opencode/1
opencode
```
You can run as many OpenCode sessions in parallel as you want. Use Git to review and merge changes.
### 3. Understanding the Plugin Architecture
The Daytona plugin consists of several modules that provide custom tools, event hooks, session-to-sandbox connections, and system prompt transforms. These modules ensure every agent action runs in a [Daytona sandbox](https://www.daytona.io/docs/en/sandboxes.md) and that changes sync to local Git branches:
- **Custom tools:** Overrides bash, read, write, edit, etc., so they run in the sandbox.
- **System prompt transform:** Injects sandbox path and instructions into the agent's system prompt.
- **Event handlers:** Handles session lifecycle events including cleanup (deleting sandboxes) and idle auto-commit (syncing changes to local git branches).
- **Session management:** Manages the creation and deletion of sandboxes and the mapping of sessions to sandboxes.
#### Custom tools
The custom tools module registers overrides for OpenCode's built-in tools so that every file and process operation goes through the Daytona SDK. It receives the project `id` and `worktree` from the plugin context and returns a tool map:
```typescript
export async function customTools(ctx: PluginInput, sessionManager: DaytonaSessionManager) {
logger.info('OpenCode started with Daytona plugin')
const projectId = ctx.project.id
const worktree = ctx.project.worktree
return {
bash: bashTool(sessionManager, projectId, worktree, ctx),
// ... read, write, edit, multiedit, patch, ls, glob, grep, lsp, getPreviewURL
}
}
```
For example, the plugin implementation of the bash tool uses the Daytona SDK to run the command in the sandbox:
```typescript
async execute(args: { command: string; background?: boolean }, ctx: ToolContext) {
const sessionId = ctx.sessionID
const sandbox = await sessionManager.getSandbox(sessionId, projectId, worktree, pluginCtx)
if (args.background) {
// ... create or get exec session, then:
const result = await sandbox.process.executeSessionCommand(execSessionId, {
command: args.command,
runAsync: true,
})
return `Command started in background (cmdId: ${result.cmdId})`
} else {
const result = await sandbox.process.executeCommand(args.command, repoPath)
return `Exit code: ${result.exitCode}\n${result.result}`
}
}
```
All stateful tools (bash, read, write, edit, glob, grep, ls, lsp, multiedit, patch) are overridden the same way. The plugin also adds a custom tool for [preview links](https://www.daytona.io/docs/en/preview.md).
#### System prompt transform
The system prompt transform extends the system prompt to include instructions for the agent to work in the sandbox and use the background option for long-running commands:
```typescript
export async function systemPromptTransform(ctx: PluginInput, repoPath: string) {
return async (input: ExperimentalChatSystemTransformInput, output: ExperimentalChatSystemTransformOutput) => {
output.system.push(
[
'## Daytona Sandbox Integration',
'This session is integrated with a Daytona sandbox.',
`The main project repository is located at: ${repoPath}.`,
'Bash commands will run in this directory.',
'Put all projects in the project directory. Do NOT try to use the current working directory of the host system.',
"When executing long-running commands, use the 'background' option to run them asynchronously.",
'Before showing a preview URL, ensure the server is running in the sandbox on that port.',
].join('\n'),
)
}
}
```
#### Session events
The session events handler listens for OpenCode session lifecycle events and handles them appropriately. When you delete a session, the handler cleans up the corresponding sandbox. When a session becomes idle, it triggers auto-commit and sync:
```typescript
export async function eventHandlers(ctx: PluginInput, sessionManager: DaytonaSessionManager, repoPath: string) {
const projectId = ctx.project.id
const worktree = ctx.project.worktree
return async (args: any) => {
const event = args.event
if (event.type === EVENT_TYPE_SESSION_DELETED) {
const sessionId = (event as EventSessionDeleted).properties.info.id
await sessionManager.deleteSandbox(sessionId, projectId)
toast.show({ title: 'Session deleted', message: 'Sandbox deleted successfully.', variant: 'success' })
} else if (event.type === EVENT_TYPE_SESSION_IDLE) {
const sessionId = event.properties.sessionID
const sandbox = await sessionManager.getSandbox(sessionId, projectId, worktree, ctx)
const branchNumber = sessionManager.getBranchNumberForSandbox(projectId, sandbox.id)
if (!branchNumber) return
const sessionGit = new SessionGitManager(sandbox, repoPath, worktree, branchNumber)
await sessionGit.autoCommitAndPull(ctx)
}
}
}
```
#### File synchronization
While OpenCode is in use, the plugin uses Git to keep session sandboxes and your local repository in sync. This only occurs if a Git repository is detected in the project directory. On plugin start:
1. The plugin looks for a Git repository in the local directory.
2. A parallel repository is created in the sandbox with a single `opencode` branch, mirroring your current local branch.
3. A `sandbox` remote is added to your local repo using an SSH connection to the sandbox.
4. Your current `HEAD` is pushed to `opencode`, and the sandbox repo is reset to that state.
5. On session idle, the plugin commits in the sandbox on `opencode`, then pulls into a local branch (`opencode/1`, `opencode/2`, etc.) which is unique to each sandbox. A notification is shown when the sync is complete.
For more information on how the sync is implemented, see the [SessionGitManager](https://github.com/daytonaio/daytona/blob/main/libs/opencode-plugin/.opencode/plugin/daytona/git/session-git-manager.ts) class.
:::caution
When the plugin syncs to local `opencode` branches, any local changes on those branches are overwritten.
:::
#### Session storage
The session manager stores which sandbox belongs to each project in JSON files (using the same base path as OpenCode via `xdg-basedir`).
- **macOS:** `~/.local/share/opencode/storage/daytona/[projectid].json`
- **Windows:** `%LOCALAPPDATA%\opencode\storage\daytona\[projectid].json`
Each file holds sandbox metadata for that project's sessions so that sandboxes are retained between OpenCode uses.
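As an illustration, a per-project file could be read and written like this (the exact schema is an assumption; only the storage locations above come from the plugin):

```typescript
import { mkdtempSync, readFileSync, writeFileSync } from 'node:fs'
import { tmpdir } from 'node:os'
import { join } from 'node:path'

// Hypothetical shape of a per-project mapping file:
// session IDs mapped to sandbox metadata.
interface ProjectStore {
  sessions: Record<string, { sandboxId: string; branchNumber?: number }>
}

// A temp directory stands in for the real storage path above.
const dir = mkdtempSync(join(tmpdir(), 'daytona-'))
const file = join(dir, 'project-123.json') // stands in for [projectid].json

const store: ProjectStore = {
  sessions: { 'session-a': { sandboxId: 'sbx-1', branchNumber: 1 } },
}
writeFileSync(file, JSON.stringify(store, null, 2))

// A later OpenCode run reads the file back to reuse the same sandbox.
const loaded: ProjectStore = JSON.parse(readFileSync(file, 'utf8'))
console.log(loaded.sessions['session-a'].sandboxId) // sbx-1
```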
**Key advantages:**
- Secure, isolated execution: each OpenCode session runs in its own Daytona sandbox
- Sandboxes persist until you delete the OpenCode session
- Live [preview links](https://www.daytona.io/docs/en/preview.md) when a server starts in the sandbox
- Automatic git sync to local branches so you can review and merge agent changes
- No script to run: add the plugin and use OpenCode as usual
This guide walks you through running the [OpenCode](https://opencode.ai/) autonomous coding agent inside a secure Daytona sandbox. The OpenCode server runs fully isolated in the sandbox and the script connects to the server using the [OpenCode SDK](https://www.npmjs.com/package/@opencode-ai/sdk).
The agent can develop full-stack web apps, write code in any language, install dependencies, and run scripts. It can also start and manage dev servers, and generate [preview links](https://www.daytona.io/docs/en/preview.md) for live apps.
### 1. Workflow Overview
When you run the script, it creates a Daytona sandbox, installs OpenCode, and starts the OpenCode server inside the sandbox. You send prompts and see responses in your terminal. On exit, the sandbox is deleted.
```
$ npm run start
Creating sandbox...
Installing OpenCode in sandbox...
Preview: https://4096-0626a08c-08e7-4fbe-9799-d160bac66b08.daytonaproxy01.net
Press Ctrl+C at any time to exit.
User: Create a minimalist collaborative whiteboard app
Thinking...
🔨 ✓ Run: Check current directory structure
✓ 3 todos
📝 Add home/daytona/SPEC.md
✓ 2 todos
📝 Add home/daytona/index.html
✓ 1 todos
✓ home/daytona/SPEC.md
✓ https://fonts.googleapis.com/css2?family=JetBrains+Mono:wght@400;500&family=Outfit:wght@400;500;700&display=swap (text/css; charset=utf-8)
✓ home/daytona/index.html
🔨 ✓ Run: List files in directory
🔨 ✓ Run: Start HTTP server for whiteboard
🔨 ✓ Run: Check if server is running
✓ 0 todos
Your minimalist collaborative whiteboard app is ready!
**Preview URL:** https://8080-0626a08c-08e7-4fbe-9799-d160bac66b08.daytonaproxy01.net
## SyncBoard - Features
**Drawing Tools:**
- Pencil (freehand drawing)
- Line, Rectangle, Ellipse (shape tools with preview)
- Eraser (removes intersecting strokes)
**Customization:**
- 6 vibrant color presets
- 3 stroke widths (thin/medium/thick)
- Keyboard shortcuts: `P` `L` `R` `E` `X` for tools, `1` `2` `3` for stroke widths
**Collaboration:**
- Simulated real-time collaboration with other users
- Unique user colors for each participant
- Auto-saves to localStorage every 2 seconds
**Actions:**
- Undo/Redo (stores 20 actions, Ctrl+Z / Ctrl+Shift+Z)
- Clear canvas (with confirmation dialog)
- Coordinates display in status bar
**Design:**
- Dark theme with coral/teal accents
- Dot grid background pattern
- Smooth animations and hover effects
- Responsive layout for mobile devices
User:
Cleaning up...
```
When your task involves running or previewing a web application, the agent can host the app and generate a [preview link](https://www.daytona.io/docs/en/preview.md) for you to inspect the live result:
You can continue interacting with your agent until you are finished. When you exit the program, the sandbox will be deleted automatically.
### 2. Project Setup
#### Clone the Repository
First, clone the Daytona [repository](https://github.com/daytonaio/daytona.git) and navigate to the OpenCode SDK example:
```bash
git clone https://github.com/daytonaio/daytona.git
cd daytona/guides/typescript/opencode/opencode-sdk
```
#### Configure Environment
Get your API key from the [Daytona Dashboard](https://app.daytona.io/dashboard/keys).
Copy `.env.example` to `.env` and add your key:
```bash
DAYTONA_API_KEY=your_daytona_key
```
#### Local Usage
:::note[Node.js Version]
Node.js 18 or newer is required to run this example. Please ensure your environment meets this requirement before proceeding.
:::
Install dependencies and run the example:
```bash
npm install
npm run start
```
The agent will start and wait for your prompt.
### 3. Understanding the Script
OpenCode uses a client-server model: the [server](https://opencode.ai/docs/server/) manages coding agents, sessions, and configuration. Clients communicate with the server [over HTTP](https://opencode.ai/docs/server/#apis) to run prompts in sessions and receive streamed responses.
#### Initialization
On startup, the script:
1. Creates a new [Daytona sandbox](https://www.daytona.io/docs/en/sandboxes.md).
2. Installs OpenCode globally in the sandbox.
3. Starts the OpenCode server in the sandbox.
4. Gets the preview URL for the server and creates a client with that base URL.
5. Enters the readline loop to send prompts to the server and receive streamed responses.
6. On Ctrl+C, deletes the sandbox and exits.
#### OpenCode Server
The OpenCode server runs inside the sandbox and handles all agent work: LLM calls, tools (bash, write, etc.), and code execution. The script starts it with:
```ts
const envVar = injectEnvVar('OPENCODE_CONFIG_CONTENT', opencodeConfig)
const command = await sandbox.process.executeSessionCommand(sessionId, {
command: `${envVar} opencode serve --port ${PORT} --hostname ${HOSTNAME}`,
runAsync: true,
})
```
A custom configuration is injected which includes a system prompt. The system prompt tells the agent it runs in a sandbox, to use `/home/daytona` for file operations, and the preview URL pattern so it can provide correct preview links.
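`injectEnvVar` is not shown here; a minimal sketch of what such a helper could look like (its real implementation may differ) is:

```typescript
// Hypothetical sketch: build a `VAR='value'` prefix for a shell command,
// single-quoting the value so the JSON config survives the shell unmodified.
function injectEnvVar(name: string, value: string): string {
  // Close the quote, emit an escaped quote, reopen: ' -> '\''
  const quoted = `'${value.replace(/'/g, `'\\''`)}'`
  return `${name}=${quoted}`
}

const config = JSON.stringify({ $schema: 'https://opencode.ai/config.json' })
console.log(injectEnvVar('OPENCODE_CONFIG_CONTENT', config))
// OPENCODE_CONFIG_CONTENT='{"$schema":"https://opencode.ai/config.json"}'
```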
#### OpenCode Client
The client runs on your machine and talks to the server using the [OpenCode SDK](https://opencode.ai/docs/sdk/). `Session.create` sets up the SDK client and subscribes to the event stream so the client can show tool activity (e.g. “✓ Run: …”, “📝 Add …”) as the agent works:
```ts
const client = createOpencodeClient({ baseUrl })
const sessionRes = await client.session.create({ body: { title: 'Daytona query' } })
const sessionId = sessionRes.data?.id
if (!sessionId) throw new Error('Failed to create OpenCode session: ' + sessionRes.error)
const events = await client.event.subscribe()
```
To send a prompt, the client calls `session.prompt` and processes events from the existing stream to show tool activity:
```ts
const promptPromise = this.client.session.prompt({
path: { id: this.sessionId },
body: { parts: [{ type: 'text', text: query } satisfies TextPartInput] },
})
for await (const event of takeUntil(this.events.stream, promptPromise)) {
printEvent(this.sessionId, event)
}
```
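The `takeUntil` helper is not shown either; a minimal sketch (an assumption, not the example's actual code) that yields stream events until the prompt promise settles could be:

```typescript
// Yield items from an async stream until `until` settles, then stop consuming.
async function* takeUntil<T>(
  stream: AsyncIterable<T>,
  until: Promise<unknown>,
): AsyncGenerator<T> {
  const stop = Symbol('stop')
  const settled = until.then(() => stop, () => stop)
  const it = stream[Symbol.asyncIterator]()
  while (true) {
    // Race the next stream item against the prompt finishing.
    const next = await Promise.race([it.next(), settled])
    if (next === stop || (next as IteratorResult<T>).done) return
    yield (next as IteratorResult<T>).value
  }
}

// Demo: a finite stream with a never-settling promise yields every item.
async function* numbers() {
  yield 1; yield 2; yield 3
}
async function demo(): Promise<number[]> {
  const out: number[] = []
  for await (const n of takeUntil(numbers(), new Promise(() => {}))) out.push(n)
  return out
}
demo().then((out) => console.log(out)) // [ 1, 2, 3 ]
```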
#### Main loop
When the server is ready, the script creates a session and runs a readline loop:
```ts
const session = await Session.create(baseUrl)
const rl = readline.createInterface({ input: process.stdin, output: process.stdout })
while (true) {
const query = await new Promise<string>((resolve) => rl.question('User: ', resolve))
if (!query.trim()) continue
await session.runQuery(query)
}
```
The readline loop waits for user input, sends it to the agent, and prints the response.
**Key advantages:**
- Secure, isolated execution in Daytona sandboxes
- OpenCode SDK client in your terminal; server runs in the sandbox
- Support for 75+ LLM providers
- All agent code execution happens inside the sandbox
- Automatic preview link generation for deployed services
- Custom agent configuration for Daytona-specific workflows
This guide demonstrates how to run the [OpenCode](https://opencode.ai/) coding agent inside a Daytona sandbox environment using OpenCode's easy-to-use [web interface](https://opencode.ai/docs/web/).
The agent can develop web apps, write code in any language, install dependencies, and run scripts. It supports over 75 different LLM providers and can start dev servers with preview links for live apps.
---
### 1. Workflow Overview
When you launch the main script, a Daytona sandbox is created and OpenCode is installed inside it. OpenCode is configured with a custom Daytona-aware agent.
The script provides a preview link to access the web interface, where you can create, configure and interact with agent sessions:
```
$ npm run start
Creating sandbox...
Installing OpenCode...
Starting OpenCode web server...
Press Ctrl+C to stop.
▄
█▀▀█ █▀▀█ █▀▀█ █▀▀▄ █▀▀▀ █▀▀█ █▀▀█ █▀▀█
█░░█ █░░█ █▀▀▀ █░░█ █░░░ █░░█ █░░█ █▀▀▀
▀▀▀▀ █▀▀▀ ▀▀▀▀ ▀ ▀ ▀▀▀▀ ▀▀▀▀ ▀▀▀▀ ▀▀▀▀
Web interface: https://3000-1e0f775c-c01b-40e7-8c64-062fd3dadd75.proxy.daytona.works/
```
The agent can host web apps and provide you with a preview link using the [Daytona Preview Links](https://www.daytona.io/docs/en/preview-and-authentication.md) feature. When your task involves running or previewing a web application, the agent automatically reasons about this need, hosts the app, and generates a preview link for you to inspect the live result:
You can continue interacting with your agent until you are finished. When you exit the program, the sandbox will be deleted automatically.
### 2. Project Setup
#### Clone the Repository
First, clone the Daytona [repository](https://github.com/daytonaio/daytona.git) and navigate to the example directory:
```bash
git clone https://github.com/daytonaio/daytona.git
cd daytona/guides/typescript/opencode
```
#### Configure Environment
Get your API key from the [Daytona Dashboard](https://app.daytona.io/dashboard/keys).
Copy `.env.example` to `.env` and add your key:
```bash
DAYTONA_API_KEY=your_daytona_key
```
#### Local Usage
:::note[Node.js Version]
Node.js 18 or newer is required to run this example. Please ensure your environment meets this requirement before proceeding.
:::
Install dependencies:
```bash
npm install
```
Run the example:
```bash
npm run start
```
The OpenCode web interface will start and wait for you to open it in your browser.
### Models and API Providers
OpenCode works with [over 75 LLM providers](https://opencode.ai/docs/providers/), with a free provider selected by default. You can change the model or provider at any time using the menu below the prompt input in the web interface. If your chosen provider needs an API key, you’ll be prompted to enter it.
#### Persisting API Keys
To persist API keys between uses of the script, you can set them as environment variables when creating the Daytona sandbox.
For example, to use an Anthropic API key, modify the `daytona.create()` call in `src/index.ts` to include your desired API key:
```typescript
sandbox = await daytona.create({
envVars: {
ANTHROPIC_API_KEY: process.env.ANTHROPIC_API_KEY || '',
},
})
```
### 3. Understanding the Script
This example consists of a Node.js script that installs, configures and runs OpenCode inside a Daytona sandbox.
#### Initialization
On initialization, the main script:
1. Creates a new [Daytona sandbox](https://www.daytona.io/docs/en/sandboxes.md).
2. Installs OpenCode globally inside the sandbox using npm with [process execution](https://www.daytona.io/docs/en/process-code-execution.md#process-execution).
3. Creates and uploads a [custom agent configuration](https://opencode.ai/docs/agents/) with Daytona-specific system prompt.
4. Starts the OpenCode web server inside the sandbox on port 3000.
5. Substitutes the URL in OpenCode's output with a [Daytona preview link](https://www.daytona.io/docs/en/preview-and-authentication.md) for the web interface.
#### Main Script Code
The script creates a session and executes OpenCode as an asynchronous command, which allows it to stream output while keeping the process alive:
```typescript
const command = await sandbox.process.executeSessionCommand(sessionId, {
  command: `${envVar} opencode web --port ${OPENCODE_PORT}`,
  runAsync: true,
})
```
When OpenCode starts its web server, it prints a link to its web UI using a localhost address (e.g., `http://127.0.0.1:3000`). However, since the sandbox runs remotely, this localhost link is only accessible inside the sandbox itself. To solve this, the script parses OpenCode’s output and replaces the URL with the corresponding Daytona preview link.
```typescript
const opencodePreviewLink = await sandbox.getPreviewLink(OPENCODE_PORT)
const replaceUrl = (text: string) =>
  text.replace(
    new RegExp(`http:\\/\\/127\\.0\\.0\\.1:${OPENCODE_PORT}`, 'g'),
    opencodePreviewLink.url
  )
```
#### OpenCode Agent Configuration
A custom system prompt is used to instruct the agent on how to use Daytona sandbox paths and preview links. This prompt is packaged into a JSON configuration string, which is passed to the sandbox as the `OPENCODE_CONFIG_CONTENT` environment variable:
```json
{
  "$schema": "https://opencode.ai/config.json",
  "default_agent": "daytona",
  "agent": {
    "daytona": {
      "description": "Daytona sandbox-aware coding agent",
      "mode": "primary",
      "prompt": "You are running in a Daytona sandbox. Use the /home/daytona directory instead of /workspace for file operations. When running services on localhost, they will be accessible as: . When starting a server, always give the user the preview URL to access it. When starting a server, start it in the background with & so the command does not block further instructions."
    }
  }
}
```
The preview URL template in the agent prompt uses `{PORT}` as a placeholder for the port to access on the Daytona sandbox. This template string is created by generating a [preview link](https://www.daytona.io/docs/en/preview-and-authentication.md) for a specific port number and then replacing the port number with `{PORT}`.
#### Clean up
When you press `Ctrl+C`, the script automatically cleans up by deleting the sandbox:
```typescript
process.once('SIGINT', async () => {
  console.log('\nCleaning up...')
  if (sandbox) await sandbox.delete()
  process.exit(0)
})
```
**Key advantages:**
- Secure, isolated execution in Daytona sandboxes
- OpenCode Web interface accessible via browser
- Support for 75+ LLM providers
- All agent code execution happens inside the sandbox
- Automatic preview link generation for deployed services
- Custom agent configuration for Daytona-specific workflows
- Clean resource management with automatic sandbox cleanup
Reinforcement learning from verified rewards is driving rapid progress in reasoning, tool use, and code generation. Agents interact with environments that provide ground-truth feedback, but this requires scalable, isolated environments that can run in parallel. [OpenEnv](https://github.com/meta-pytorch/OpenEnv) is a framework for building and serving RL environments that addresses this, decoupling the environment from the training loop so each instance can run in its own container. It has native support for Daytona sandboxes, enabling parallel execution across many isolated instances.
This guide demonstrates the OpenEnv + Daytona integration through [FinQA](https://huggingface.co/datasets/snorkelai/finqa-data), a multi-turn, tool-calling environment contributed by [Snorkel AI](https://snorkel.ai/) to OpenEnv. FinQA presents the model with financial questions based on SEC 10-K filings and gives it SQL tools to explore the underlying data before submitting an answer.
---
### 1. OpenEnv and Daytona
OpenEnv environments are standalone servers that expose a Gymnasium-style API (`reset`, `step`, `state`) over the network. The environment logic (e.g. a FinQA database with SQL tools, a code execution sandbox, a web browsing agent) runs inside a container; the training loop connects as a remote client. This separation means the environment and the trainer don't need to share a process, a machine, or even a language; they communicate over a standard protocol.
In MCP-enabled environments like FinQA, tools are exposed via MCP (Model Context Protocol) over JSON-RPC. Clients discover available tools at runtime via `tools/list` and invoke them via `tools/call`. This is how the model's tool calls in the training loop get routed to the actual environment running inside a container.
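As a concrete illustration of the wire format, the `tools/list` envelope below is the same one `fetch_tools_from_env` sends later in this guide; the `tools/call` shape follows the MCP convention of carrying the tool name and arguments in `params` (the `sql_query` payload is illustrative):

```python
import json

# JSON-RPC envelope for discovering tools (sent over the WebSocket)
list_request = {"jsonrpc": "2.0", "method": "tools/list", "params": {}, "id": 1}

# JSON-RPC envelope for invoking a tool; per MCP, params carry the
# tool name and its arguments
call_request = {
    "jsonrpc": "2.0",
    "method": "tools/call",
    "params": {"name": "sql_query", "arguments": {"query": "SELECT 1"}},
    "id": 2,
}

# Both envelopes are serialized to JSON before being sent to the server
wire = json.dumps(call_request)
```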
OpenEnv has a pluggable provider model for _where_ environments run: local Docker, Docker Swarm, or Daytona. The `DaytonaProvider` launches each environment instance as a Daytona sandbox, which gives you API-driven lifecycle management and the ability to run thousands of instances in parallel without local infrastructure. In the training loop below, each sandbox is an independent FinQA environment with its own persistent WebSocket connection.
### 2. Workflow Overview
We cover two modes of using the FinQA environment:
- **`run.py`** — A single-episode demo that creates one sandbox, runs one complete interaction, and tears down. Useful for understanding the environment and verifying your setup.
- **`train.py`** — A full GRPO training loop that creates 500 sandboxes, collects multi-turn rollouts in parallel with batched vLLM generation, runs policy gradient updates with LoRA, and hot-swaps adapters into vLLM between iterations.
Both modes use the same underlying episode structure. Each episode is a multi-turn, tool-calling interaction:
1. **Reset**: A sandbox starts a new episode with a random financial question about a company
2. **Explore**: The model calls tools to discover tables, inspect schemas, and run SQL queries against that company's 10-K data
3. **Submit**: After gathering enough data, the model calls `submit_answer` with its computed answer
4. **Reward**: The environment returns a binary reward (1.0 = correct, 0.0 = wrong)
The available tools are:
| Tool | Description |
| ------------------------------------------ | ----------------------------------------------- |
| `get_descriptions(company_name)` | List available tables for a company |
| `get_table_info(company_name, table_name)` | Get column names and types |
| `sql_query(query)` | Run a SQL query against the company's 10-K data |
| `submit_answer(answer)` | Submit a final answer (terminates the episode) |
### 3. Setup
#### Clone the Repository
:::note[GPU Requirement]
The full training requires 4 GPUs with 80GB+ VRAM each (2 for vLLM inference, 2 for training). The single-episode demo (`run.py`) does not require a GPU.
:::
Clone the [Daytona repository](https://github.com/daytonaio/daytona.git) and navigate to the example directory:
```bash
git clone https://github.com/daytonaio/daytona.git
cd daytona/guides/python/reinforcement-learning/openenv
```
#### Create Virtual Environment
:::note[Python Version]
Python 3.10 or higher is required.
:::
```bash
python3.10 -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
```
#### Install Dependencies
```bash
pip install -e .
```
This installs:
- `daytona` - Daytona SDK for sandbox management
- `openenv-core` - OpenEnv runtime (WebSocket-based RL environment protocol)
- `openenv-finqa-env` - The FinQA environment client (`FinQAEnv`, `CallToolAction`)
- `python-dotenv` - Environment variable management
For training, also install the training extras:
```bash
pip install -e ".[train]"
```
This adds: `torch`, `transformers`, `vllm`, `peft` (for LoRA), and `numpy`. See [Running the Training](#6-running-the-training) for additional system requirements before starting `train.py`.
#### Configure Environment
Get your Daytona API key from the [Daytona Dashboard](https://app.daytona.io/dashboard/keys) and create a `.env` file:
```bash
DAYTONA_API_KEY=your_daytona_api_key
```
#### Build the Snapshot
Before running any episodes, build a Daytona snapshot that pre-packages the FinQA environment server and dataset:
```bash
python build_snapshot.py
# Or, with a custom snapshot name:
python build_snapshot.py --snapshot-name my-finqa
```
This uses Daytona's declarative Image API to clone the FinQA environment from the OpenEnv repo, install its dependencies, and pre-download the FinQA dataset from HuggingFace, all baked into a single container image for fast sandbox startup.
:::note
If you use a custom snapshot name, you'll need to update the `SNAPSHOT` constant in `run.py` and pass the `--snapshot` flag when running `train.py`.
:::
### 4. Running a Single Episode
The `run.py` script demonstrates the full OpenEnv + Daytona integration in a single episode. Run it with:
```bash
python run.py
```
Let's walk through the key components.
#### Sandbox Creation
The `DaytonaProvider` from OpenEnv wraps the Daytona SDK, creating a sandbox from the pre-built snapshot and waiting for the FinQA server to become healthy:
```python
from openenv.core.containers.runtime.daytona_provider import DaytonaProvider

def create_sandbox():
    provider = DaytonaProvider(auto_stop_interval=0, cmd=SERVER_CMD)
    url = provider.start_container(f"snapshot:{SNAPSHOT}")
    provider.wait_for_ready(url, 120)
    return provider, url
```
#### Connecting to the Environment
OpenEnv communicates over WebSocket. The `FinQAEnv` client handles the connection, and `env.reset()` starts a new episode with a random question:
```python
from finqa_env import CallToolAction, FinQAEnv

async with FinQAEnv(base_url=url) as env:
    await env.reset()
    # Get the question and company for this episode
    state = await env._send_and_receive({"type": "state"})
    data = state.get("data", {})
    question = data.get("current_question", "")
    company = data.get("current_company", "")
```
#### Two API Styles
OpenEnv provides two ways to interact with the environment:
**`call_tool()`** — for exploration, returns the raw result with no RL tracking:
```python
# Discover available tables
descriptions = await env.call_tool("get_descriptions", company_name=company)
table_names = json.loads(descriptions)

# Inspect a table's schema
table_info = await env.call_tool(
    "get_table_info", company_name=company, table_name=table_names[0]
)
```
**`step()`** — wraps the tool call in an RL-style `StepResult` with `.observation.done` and `.observation.reward`:
```python
# Run a SQL query (with RL reward/done tracking)
query = f'SELECT * FROM "{table_names[0]}" LIMIT 5'
step_result = await env.step(
    CallToolAction(tool_name="sql_query", arguments={"query": query})
)
obs = step_result.observation
print(f"SQL result (done={obs.done}, reward={obs.reward})")

# Submit a final answer (terminates the episode)
step_result = await env.step(
    CallToolAction(tool_name="submit_answer", arguments={"answer": "0"})
)
obs = step_result.observation
print(f"Submitted (done={obs.done}, reward={obs.reward})")
```
Use `call_tool()` when exploring, and `step()` when you need reward/done signals (e.g., in a training loop).
#### Expected Output
```
Creating sandbox from snapshot 'openenv-finqa'...
Waiting for server health check...
Server healthy.
Question: What was the total revenue for fiscal year 2023?
Company: ExampleCorp
Tables: ['income_statement', 'balance_sheet', 'cash_flow']
Schema: {"columns": [{"name": "fiscal_year", "type": "INTEGER"}, ...]}
SQL result (done=False, reward=0.0):
[{"fiscal_year": 2023, "revenue": 45200, ...}, ...]
Submitted (done=True, reward=0.0)
============================================================
Episode complete
Question: What was the total revenue for fiscal year 2023?
Reward: 0.0
Steps: 2
============================================================
Cleaning up sandbox...
Done.
```
### 5. Understanding the Training Code
The `train.py` script (~1800 lines) implements end-to-end GRPO training with parallel rollout collection across hundreds of sandboxes. Let's walk through its key components.
#### System Prompt
The model is instructed to act as a financial analyst, using tools iteratively to gather data before answering:
```python
SYSTEM_PROMPT = """\
You are a financial analyst assistant answering questions about SEC 10-K filings.
Think and reason step by step. Iteratively gather data using the available tools until you have enough information to answer the question.
When submitting your final answer:
- Provide ONLY the numerical value. No explanations, units, or LaTeX formatting.
- Always express percentages, growth rates, and percentage point differences as decimal ratios by dividing by 100 (e.g., 22% → 0.22, -8.9% → -0.089, a 4.5 percentage point difference → 0.045).
- Submit numbers exactly as they appear in the query results. Do not convert units (e.g., if the table shows values in millions, submit the number as-is, not multiplied out).
- For multi-year answers, use: year: value, year: value (e.g., 2022: 0.933, 2023: 0.930, 2024: 0.931)
- For year-over-year changes, use: year to year: value (e.g., 2022 to 2023: 0.189, 2023 to 2024: 0.025)
- For single values, just submit the number (e.g., 0.895 or -77 or 63)
- If the question is yes/no, answer Yes or No"""
```
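The formatting rules in the prompt are what the binary reward implicitly selects for. As a rough sketch of what they mean in practice, here is a hypothetical normalizer, not part of `train.py`, that maps a percentage string to the decimal ratio the prompt asks for:

```python
def normalize_percentage(raw: str) -> float:
    # Hypothetical helper for illustration: "22%" -> 0.22, "-8.9%" -> -0.089,
    # mirroring the prompt's rule of dividing percentages by 100
    value = raw.strip().rstrip("%")
    return float(value) / 100.0
```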
#### Tool Schema Fetching
Tool schemas are fetched dynamically from a connected environment via MCP JSON-RPC over WebSocket, then converted to OpenAI function-calling format for use with the chat template:
```python
async def fetch_tools_from_env(env: FinQAEnv) -> list[dict]:
    resp = await env._send_and_receive(
        {
            "type": "mcp",
            "data": {"jsonrpc": "2.0", "method": "tools/list", "params": {}, "id": 1},
        }
    )
    mcp_tools = resp["data"]["result"]["tools"]
    # Convert each tool to OpenAI function-calling format
    openai_tools = []
    for t in mcp_tools:
        schema = t.get("inputSchema") or t.get("input_schema") or {}
        properties = {}
        required = []
        if "properties" in schema:
            for name, prop in schema["properties"].items():
                properties[name] = {
                    "type": prop.get("type", "string"),
                    "description": prop.get("description", ""),
                }
            required = schema.get("required", [])
        openai_tools.append(
            {
                "type": "function",
                "function": {
                    "name": t["name"],
                    "description": t.get("description", ""),
                    "parameters": {
                        "type": "object",
                        "properties": properties,
                        "required": required,
                    },
                },
            }
        )
    return openai_tools
```
#### Sandbox Pool Management
The training creates hundreds of sandboxes upfront from the pre-built snapshot, with staggered launches to stay under API rate limits:
```python
async def create_sandbox_pool(
    n: int, snapshot_name: str, semaphore: asyncio.Semaphore
):
    pool_by_idx: list[tuple | None] = [None] * n

    async def create_one(idx: int):
        async with semaphore:
            provider = DaytonaProvider(auto_stop_interval=0, cmd=SERVER_CMD)
            url = await asyncio.to_thread(
                provider.start_container, f"snapshot:{snapshot_name}"
            )
            for attempt in range(3):
                try:
                    await asyncio.to_thread(provider.wait_for_ready, url, 120)
                    break
                except Exception:
                    if attempt == 2:
                        raise
                    await asyncio.sleep(3)
            pool_by_idx[idx] = (provider, url)

    # Stagger launches (10 at a time with 1s sleep) to stay under rate limits
    tasks = []
    for i in range(n):
        tasks.append(asyncio.create_task(create_one(i)))
        if (i + 1) % 10 == 0:
            await asyncio.sleep(1.0)
    await asyncio.gather(*tasks, return_exceptions=True)
    return [entry for entry in pool_by_idx if entry is not None]
```
After creation, persistent WebSocket connections are opened to all sandboxes with extended ping timeouts to survive long vLLM generation steps:
```python
async def connect_envs(pool, play_sem: asyncio.Semaphore) -> list[FinQAEnv]:
    envs: list[FinQAEnv | None] = [None] * len(pool)

    async def connect_one(i: int, url: str):
        async with play_sem:
            env = FinQAEnv(base_url=url)
            await env.connect()
            # Extend ping timeout to survive long vLLM generation steps
            if hasattr(env, "_ws") and env._ws is not None:
                env._ws.ping_timeout = 300
            envs[i] = env

    await asyncio.gather(
        *[connect_one(i, url) for i, (_, url) in enumerate(pool)]
    )
    return [env for env in envs if env is not None]
```
With 500 long-lived WebSocket connections, some will inevitably go stale mid-training (network blips, server-side timeouts, etc.). The `reconnect_envs` function runs a periodic health-check sweep: it sends a lightweight state ping to every connection, and any socket that doesn't respond within 5 seconds gets closed and replaced. Connections with in-flight episode requests are skipped to avoid WebSocket message interleaving, where a ping response and a step response arrive on the same socket and get delivered to the wrong awaiter:
```python
async def reconnect_envs(
    envs: list[FinQAEnv], pool, skip_indices: set[int] | None = None,
) -> list[FinQAEnv]:
    reconnected = 0
    skip = skip_indices or set()

    async def check_and_reconnect(i: int):
        nonlocal reconnected
        env = envs[i]
        try:
            # Quick health check — if the WS is alive this returns fast
            await asyncio.wait_for(
                env._send_and_receive({"type": "state"}), timeout=5.0
            )
        except Exception:
            # Connection is dead — close and reopen
            try:
                await env.close()
            except Exception:
                pass
            _, url = pool[i]
            new_env = FinQAEnv(base_url=url)
            await new_env.connect()
            if hasattr(new_env, "_ws") and new_env._ws is not None:
                new_env._ws.ping_timeout = 300
            envs[i] = new_env
            reconnected += 1

    await asyncio.gather(
        *[check_and_reconnect(i) for i in range(len(envs)) if i not in skip],
        return_exceptions=True,
    )
```
#### Multi-Turn Rollout Collection
The `collect_rollouts` function is the heart of the training loop (~430 lines). It keeps all sandboxes continuously occupied, using a sophisticated async event loop:
1. **Dynamic refill**: As soon as one episode finishes on a sandbox, a new one starts immediately
2. **Batched vLLM generation**: Episodes waiting for a model response are accumulated and dispatched to vLLM as a single batch for throughput
3. **Tool call parsing**: Model outputs are parsed for tool calls (Hermes-style XML, raw JSON, or bare-answer fallback)
4. **Forced termination**: Episodes exceeding `MAX_EPISODE_STEPS` (default 20) get a forced `submit_answer("unknown")`
The flow for a single episode within the rollout engine:
```python
# 1. Start an episode on an idle sandbox
async def start_episode(env_idx: int) -> ActiveEpisode:
    env = envs[env_idx]
    await env.reset()
    state = await env._send_and_receive({"type": "state"})
    question = state["data"]["current_question"]
    company = state["data"]["current_company"]
    chat_history = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"Company: {company}\nQuestion: {question}"},
    ]
    return ActiveEpisode(env=env, sandbox_idx=env_idx,
                         chat_history=chat_history, ...)

# 2. Build prompt and generate with vLLM (batched across all ready episodes)
prompt_str = build_chat_prompt(tokenizer, ep.chat_history)
outputs = vllm_model.generate(prompts=prompts, sampling_params=params)

# 3. Parse tool call from generated text
tool_name, tool_args = parse_tool_call(generated_text)

# 4. Execute in the sandbox
result = await ep.env.step(CallToolAction(tool_name=tool_name, arguments=tool_args))

# 5. If not done: append to chat history, re-enter ready queue
#    If done or max steps: capture reward, mark sandbox as idle
```
The `parse_tool_call` function handles multiple output formats from the model:
```python
def parse_tool_call(text: str) -> tuple[str, dict]:
    # Pattern 1: Hermes-style XML tool call, e.g.
    # <tool_call>{"name": "sql_query", "arguments": {"query": "..."}}</tool_call>
    m = re.search(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", text, re.DOTALL)
    if m:
        data = json.loads(m.group(1))
        # ... extract name and arguments
    # Pattern 2: Raw JSON objects
    for data in iter_json_objects(text):
        # ... try to extract from {"name": ..., "arguments": ...} format
    # Pattern 3: Bare answer after a closing tag
    # Pattern 4: Text that looks like a number/short answer
    # Fallback: submit_answer({"answer": "unknown"})
```
#### Stale WebSocket Cleanup After Cancellation
When rollout collection reaches its target episode count, it cancels any in-flight tasks (episode starts, step requests, forced terminations). But cancellation creates a subtle problem: cancelled tasks leave stale responses queued on their WebSocket connections. If the next iteration reuses that socket, a step response could pick up a stale message from a cancelled task, corrupting the episode.
To prevent this, the code tracks which envs had in-flight WebSocket requests at cancellation time, then force-disconnects those specific sockets. The next `reconnect_envs()` call reopens them cleanly:
```python
# Cancel excess in-flight work once target sample count is reached
pending_cancel = list(start_tasks.keys()) + list(step_tasks.keys()) + list(force_tasks.keys())

# Track envs with in-flight WS requests — cancellation leaves stale
# responses queued on the socket, corrupting subsequent communication
stale_env_indices = set()
for env_idx in start_tasks.values():
    stale_env_indices.add(env_idx)
for meta in step_tasks.values():
    stale_env_indices.add(meta[0].sandbox_idx)
for fep in force_tasks.values():
    stale_env_indices.add(fep.sandbox_idx)

for task in pending_cancel:
    task.cancel()
await asyncio.gather(*pending_cancel, return_exceptions=True)

# Force-disconnect envs whose WebSocket has stale responses from
# cancelled tasks. The next reconnect_envs() will reopen them cleanly.
for idx in stale_env_indices:
    try:
        await envs[idx].disconnect()
    except Exception:
        pass
```
#### GRPO: Grouping and Advantages
Episodes are grouped by question (identified by `question_id`). Each group must contain exactly `group_size` episodes (default 6); leftover episodes carry over to the next iteration:
```python
def build_strict_prompt_groups(
    episodes: list[Episode], group_size: int
) -> tuple[list[list[Episode]], list[Episode]]:
    buckets: dict[tuple[str, str], list[Episode]] = defaultdict(list)
    for ep in episodes:
        buckets[episode_prompt_key(ep)].append(ep)
    groups, leftovers = [], []
    for bucket in buckets.values():
        n_full = len(bucket) // group_size
        for i in range(n_full):
            groups.append(bucket[i * group_size : (i + 1) * group_size])
        leftovers.extend(bucket[n_full * group_size :])
    return groups, leftovers

def compute_group_advantages(groups: list[list[Episode]]) -> list[list[float]]:
    all_advantages = []
    for group in groups:
        rewards = np.array([ep.reward for ep in group])
        std = float(np.std(rewards))
        if len(group) > 1 and std > 1e-8:
            mean = float(np.mean(rewards))
            advs = (rewards - mean) / (std + 1e-8)
        else:
            advs = np.zeros_like(rewards)  # No gradient signal
        all_advantages.append([float(a) for a in advs])
    return all_advantages
```
Within each group, advantages are computed as standard GRPO normalization: `(reward - mean) / std`. If all episodes in a group got the same reward, advantages are zero (no gradient signal from that group).
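For example, with the default group size of 6 and two correct episodes out of six (rewards chosen for illustration):

```python
import numpy as np

# Rewards for one group of 6 rollouts of the same question (illustrative)
rewards = np.array([1.0, 0.0, 0.0, 1.0, 0.0, 0.0])

# Standard GRPO normalization: (reward - mean) / std
advs = (rewards - rewards.mean()) / (rewards.std() + 1e-8)

# Correct rollouts get positive advantage, incorrect ones negative,
# and the advantages sum to zero within the group
print(advs.round(4))
```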
#### GRPO Policy Gradient Update
The update processes each episode's turns as individual training samples. The loss per turn is `-(advantage * policy_logprob)`:
```python
def grpo_update(
    train_model, optimizer, episodes_flat, advantages_flat, batch_size=12
) -> float:
    train_model.train()
    optimizer.zero_grad(set_to_none=True)
    # Flatten episodes into turn-level samples, sorted by length
    # for efficient padding
    for start in range(0, len(turn_samples), batch_size):
        chunk = turn_samples[start : start + batch_size]
        # Pad and create attention masks
        outputs = train_model(input_ids=input_t, attention_mask=attn_mask)
        # Extract completion logprobs
        nll = F.cross_entropy(completion_logits, comp_targets, reduction="none")
        policy_lps = -nll
        # GRPO loss: -(advantage * logprob) weighted by token count
        token_loss = (-adv_t * policy_lps) * valid_mask
        batch_loss = (token_loss * scale_t).sum()
        batch_loss.backward()
    torch.nn.utils.clip_grad_norm_(train_model.parameters(), max_norm=1.0)
    optimizer.step()
```
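The core objective is easy to state numerically. A toy numpy version of the per-turn loss (the real update uses torch tensors and backpropagation; the log-probs, mask, and advantage below are made-up values):

```python
import numpy as np

# Token-level log-probs for one generated turn (illustrative values)
policy_lps = np.array([-0.5, -1.2, -0.3, -2.0])
valid_mask = np.array([1.0, 1.0, 1.0, 0.0])  # last token is padding
advantage = 1.5  # group-normalized advantage for this episode

# Per-token GRPO loss: -(advantage * logprob), masked to real tokens.
# With a positive advantage, minimizing this loss pushes logprobs up.
token_loss = (-advantage * policy_lps) * valid_mask

# Scale by token count so long turns don't dominate the batch
loss = token_loss.sum() / valid_mask.sum()
```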
#### LoRA Hot-Swap
After each training iteration, the updated LoRA adapter is exported and loaded into vLLM for the next rollout. This ensures rollouts always use the freshly updated policy:
```python
def export_lora_adapter(train_model, export_root, iteration) -> str:
    out_dir = os.path.join(export_root, f"iter_{iteration:04d}")
    train_model.save_pretrained(out_dir)
    return out_dir

# In the training loop:
if new_lora_dir:
    active_lora_request = lora_request_cls(
        f"grpo_iter_{it + 1}", lora_request_seq, new_lora_dir
    )
# Future vLLM generations use the new adapter automatically
```
#### GPU Layout and Lag-1 Pipeline
The training uses a 4-GPU setup with clear separation:
- **GPUs 0-1**: vLLM with tensor parallelism (TP=2) for fast batched generation during rollouts
- **GPUs 2-3**: Base model + LoRA with `device_map="auto"` for training
The training loop overlaps iteration N's gradient update (on GPUs 2-3) with iteration N+1's rollout collection (on GPUs 0-1):
```python
for it in range(args.iterations):
    batch = prepared_batch

    # 1. Start GRPO update on a background thread (uses GPUs 2-3)
    update_task = asyncio.create_task(
        asyncio.to_thread(
            run_grpo_update_and_maybe_export,
            train_model, optimizer, batch, ...
        )
    )

    # 2. While train GPUs are busy, prepare next batch (uses GPUs 0-1)
    if it + 1 < args.iterations:
        prepared_batch = await prepare_train_batch(
            envs=envs, pool=pool, vllm_model=vllm_model, ...
        )

    # 3. Await the update, hot-swap LoRA adapter into vLLM
    loss, new_lora_dir = await update_task
    if new_lora_dir:
        active_lora_request = lora_request_cls(
            f"grpo_iter_{it + 1}", lora_request_seq, new_lora_dir
        )
```
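The payoff of this lag-1 structure is that the gradient update and the next rollout collection run concurrently instead of back-to-back. A toy demonstration of the same overlap pattern, with sleeps standing in for GPU work:

```python
import asyncio
import time

def gradient_update():
    # Stand-in for the blocking GRPO update on the training GPUs
    time.sleep(0.2)
    return "updated"

async def collect_rollouts():
    # Stand-in for vLLM-driven rollout collection on the inference GPUs
    await asyncio.sleep(0.2)
    return "batch"

async def one_iteration():
    start = time.monotonic()
    # Kick off the blocking update on a worker thread ...
    update_task = asyncio.create_task(asyncio.to_thread(gradient_update))
    # ... and collect the next batch while it runs
    batch = await collect_rollouts()
    result = await update_task
    return batch, result, time.monotonic() - start

batch, result, elapsed = asyncio.run(one_iteration())
# Overlapped: elapsed is ~0.2s rather than the ~0.4s of running sequentially
```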
### 6. Running the Training
:::tip[Open File Limit]
With hundreds of concurrent WebSocket connections, you may hit the default open file descriptor limit. Raise it before starting:
```bash
ulimit -n 65536
```
:::
:::note[Python Dev Headers]
vLLM requires Python development headers for Triton's JIT compilation. Make sure they're installed:
```bash
apt install python3.10-dev
```
:::
Start training with:
```bash
python train.py
```
For a quick smoke test with minimal resources:
```bash
python train.py --sandboxes 2 --iterations 1 --group-size 2
```
You'll see output like:
```
Creating 500 sandboxes from snapshot 'openenv-finqa' ...
All 500 sandboxes ready.
Connecting to sandboxes ...
All 500 connections ready.
Tools: ['get_descriptions', 'get_table_info', 'sql_query', 'submit_answer']
iter accuracy avg_steps loss groups eps/s time
------------------------------------------------------------------------
1/10 0.082 8.3 0.0234 100 12.5 480s
2/10 0.117 7.9 0.0198 100 13.1 458s
...
```
After training completes, artifacts are saved to `runs/YYYYMMDD_HHMMSS/`:
- `config.json` — Full training configuration
- `metrics.jsonl` — Per-iteration metrics (accuracy, loss, eps/sec, etc.)
- `rollouts.jsonl` — Per-round rollout summaries
- `trajectories.jsonl` — Every episode with all turns (tool calls, results, reward)
### 7. Training Results
We ran a full training run with default parameters (`--sandboxes 500 --iterations 10 --group-size 6`), training Qwen3-14B with LoRA on the FinQA task. Each episode presents the model with a financial question about a real company's SEC 10-K filing. Questions like:
- _"What is the year-over-year percentage growth in Total Revenues from fiscal year 2023 to fiscal year 2024?"_ (Walmart)
- _"What is the ratio of Domestic Income to Foreign Income for continuing operations before income taxes in 2022?"_ (Alphabet)
- _"What fraction of the finance lease liability balance is due in the next twelve months?"_ (Alphabet)
- _"What is the net change in gross unrecognized tax benefits for the year ended December 31, 2024?"_ (Alphabet)
The model must use the available tools to discover tables, inspect schemas, run SQL queries, and compute a final answer — receiving a binary reward (1.0 = correct, 0.0 = wrong). Over 10 iterations (~1,000 episodes each), accuracy more than doubled.
#### What Did the Model Actually Learn?
The model did not learn a fundamentally new strategy for navigating financial databases. From iteration 1 onward, it already followed the correct pipeline: `get_descriptions` → `get_table_info` → `sql_query` → `submit_answer`. Average turn count barely changed (4.5 → 4.8). Most of the training's impact came from **two specific behavioral fixes** that removed systematic failure modes, plus a set of subtler improvements in data interpretation.
#### The Year-Column Quoting Fix
This was the single biggest win. The FinQA database uses year strings as column names (`"2024"`, `"2023"`, `"2022"`). In SQLite, writing `SELECT 2024 FROM table` returns the **integer literal** 2024, not the data in column `"2024"`. The correct syntax is `SELECT "2024" FROM table`.
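This SQLite behavior is easy to verify directly. The snippet below is a self-contained reproduction (the table name and values are illustrative, not from the FinQA dataset):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Year strings as column names, mirroring the FinQA schema
conn.execute('CREATE TABLE revenues ("2024" TEXT, "2023" TEXT)')
conn.execute("INSERT INTO revenues VALUES ('$648,125', '$611,289')")

# Unquoted: SELECT 2024 evaluates the integer literal 2024, not the column
echo = conn.execute("SELECT 2024, 2023 FROM revenues").fetchone()

# Quoted: SELECT "2024" reads the actual column data
data = conn.execute('SELECT "2024", "2023" FROM revenues').fetchone()

print(echo)  # (2024, 2023)
print(data)  # ('$648,125', '$611,289')
```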
In early training, the model consistently wrote unquoted year columns:
```sql
-- Iteration 1: Unquoted year columns (BROKEN)
SELECT 2024, 2023 FROM us_gaap_ScheduleOfRevenuesFromExternalCustomers...
WHERE operation_type = 'total_revenues'
-- Returns: [{"2024": 2024, "2023": 2023}] ← integer literals, not data!
```
The model received back the same number it asked for, `2024`, instead of the actual revenue figure like `$648,125`. With garbage in, every downstream computation was wrong.
By iteration 8, the model had completely learned to quote:
```sql
-- Iteration 8+: Quoted year columns (CORRECT)
SELECT "2024" AS rev2024, "2023" AS rev2023
FROM us_gaap_ScheduleOfRevenuesFromExternalCustomers...
WHERE operation_type = 'total_revenues'
-- Returns: [{"rev2024": "$648,125", "rev2023": "$611,289"}] ← actual data
```
The adoption curve was dramatic:
| Iteration | Quoted SQL Queries | Integer-Echo Bug Rate | `SELECT *` Rate |
| --------- | ------------------ | --------------------- | --------------- |
| 1 | 22% | 22.4% | 7.8% |
| 3 | 20% | 21.9% | 6.9% |
| 5 | 44% | 8.5% | 4.2% |
| 7 | 94% | 0.0% | 0.6% |
| 8 | 99% | 0.0% | 0.0% |
| 10 | **99.8%** | **0.0%** | **0.0%** |
The year-echo bug directly caused ~28% of all early failures. Its complete elimination accounts for the bulk of the accuracy improvement. Note that `SELECT *` usage (which the environment rejects as "too inefficient") was also fully eliminated by iteration 8, removing another source of wasted turns.
The fix did come with minor collateral damage: the model learned to quote _everything_, and in rare cases where a row value (not a column name) looks like an identifier, the over-quoting produces garbage.
**Concrete example — Walmart revenue growth:**
```
Turn 1: get_descriptions("walmart") → [list of 30 tables]
Turn 2: get_table_info("us_gaap_ScheduleOfRevenues...") → columns: category, operation_type, 2025, 2024, 2023
Turn 3: sql_query("SELECT 2024, 2023 FROM ...ScheduleOfRevenues...
                   WHERE operation_type = 'total_revenues'")
        → [{"2024": 2024, "2023": 2023}] ← INTEGERS, not revenue!
Turn 4: submit_answer("2024: 2024, 2023: 2023") ← Garbage
```
```
Turn 1: get_descriptions("walmart") → [same tables]
Turn 2: get_table_info("us_gaap_ScheduleOfRevenues...") → [same schema]
Turn 3: sql_query('SELECT "2024" as rev2024, "2023" as rev2023
                   FROM ...ScheduleOfRevenues...
                   WHERE operation_type = \'total_revenues\'')
        → [{"rev2024": "$648,125", "rev2023": "$611,289"}] ← Real data
Turn 4: submit_answer("0.0599") ← Correct: (648125-611289)/611289
```
**Concrete example — cascading failure from the year-echo bug (Boeing):**
This trajectory shows how the bug compounds when the model doesn't recover:
```
Turn 1: get_descriptions("boeing") → [table list]
Turn 2: get_table_info("us_gaap_ScheduleOf...") → Error: table not found
Turn 3: get_table_info("ba_ScheduleOf...") → columns: item, 2024, 2023 ← Found via company prefix
Turn 4: sql_query("SELECT * FROM ...") → Error: SELECT * is not allowed ← Wasted turn
Turn 5: sql_query("SELECT 2023 FROM ... WHERE item = '...'") → [{"2023": 2023}] ← Year echo
Turn 6: sql_query("SELECT 2023 FROM ... WHERE item = '...'") → [{"2023": 2023}] ← Year echo again
Turn 7: sql_query("SELECT 2023 FROM ... WHERE item = '...'") → [{"2023": 2023}] ← Year echo again
Turn 8: submit_answer("0.0") → reward=0.0 ← Out of turns, gives up
```
The model hit the table-name error (turn 2), recovered by trying a company-specific prefix (turn 3), wasted a turn on `SELECT *` (turn 4), then repeated the unquoted year query three times (turns 5-7) — never learning within the episode that it needed quotes. All 8 turns exhausted with no useful data retrieved. After training, this entire failure pattern disappears.
#### Numeric Answer Formatting
Early on, the model frequently submitted raw SQL results, multi-value strings, or narrative text instead of a single computed number:
```
Early answers: "31586: 30582" "2024: 2024, 2023: 2023" "22935,24862" "unknown"
Late answers: "0.0328" "0.0599" "0.0843" "0.3385"
```
| Iteration | Clean Numeric Answer | Multi-Value / Mixed | Unknown |
| --------- | -------------------- | ------------------- | ------- |
| 1 | 56% | 42% | 2.0% |
| 5 | 80% | 18% | 1.8% |
| 10 | **93%** | 5% | 1.4% |
This mattered enormously because the reward function expects a single value. Multi-value submissions like `"31586: 30582"` were always graded as wrong even when the underlying data was correct. The model learned through GRPO that the rewarded behavior is to _compute_ the answer (e.g., percentage change, ratio) and submit a single decimal.
This is net-positive learning: it fixes the vast majority of questions (~90%), which ask for a single ratio or percentage, but it's a blunt instrument. The model internalized "single decimal = reward" without learning "...except when the question asks for multiple values." This caused a few regressions: one Alphabet question asks for non-operating income across three years. Early on, the model answered `"2022: -3514, 2023: 1424, 2024: 7425"` (correct multi-value format, rewarded). After training hammered in the single-decimal habit, it tried to compress three years into one number and failed. The same learning that fixed 90% of answer formatting broke the ~5% of questions that legitimately need multi-value answers.
**Concrete example — raw data dump to computed ratio (GM):**
```
Turn 3: sql_query("SELECT december_31_2024, december_31_2023
FROM ...LessorOperatingLease...
WHERE line_item = 'leased_vehicles,_net'")
→ [{"december_31_2024": "$31,586", "december_31_2023": "$30,582"}]
Turn 4: submit_answer("31586: 30582") ← Dumps both values
```
Later in training:
```
Turn 3: sql_query('SELECT "december_31_2024", "december_31_2023"
FROM ...LessorOperatingLease...
WHERE line_item = \'leased_vehicles,_net\'')
→ [{"december_31_2024": "$31,586", "december_31_2023": "$30,582"}]
Turn 4: submit_answer("0.0328") ← Computes (31586-30582)/30582
```
#### Parenthetical Negative Notation
SEC filings use `$(X)` to denote negative values (accounting convention). Early on, the model missed the negative sign:
**Concrete example — Ford, return on plan assets ratio:**
```
Turn 3: sql_query(...) → return_on_assets = "$(6)", fair_value = "$9"
Turn 4: submit_answer("0.6666666666666666") ← Positive! Missed the $(6) = -6
```
Later in training:
```
Turn 3: sql_query(...) → return_on_assets = "$(6)", fair_value = "$9"
Turn 4: submit_answer("-0.6666666666666666") ← Negative! Correctly interprets $(6) as -6
```
The model learned that parenthetical dollar amounts like `$(6)` represent negative values, a domain-specific convention that required RL signal to internalize.
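In code, handling this convention is a small parsing step. A sketch of such a parser (the environment's actual parsing logic is not shown in this guide):

```python
def parse_financial(value: str) -> float:
    """Parse accounting-style amounts: "$648,125" -> 648125.0, "$(6)" -> -6.0."""
    s = value.strip().replace("$", "").replace(",", "")
    # Parentheses denote a negative value in SEC filings.
    if s.startswith("(") and s.endswith(")"):
        return -float(s[1:-1])
    return float(s)

print(parse_financial("$(6)"))      # -6.0
print(parse_financial("$648,125"))  # 648125.0
```

The model has no such parser available as a tool; it must perform the equivalent interpretation in-context, which is why the convention had to be learned from reward signal.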
#### Adaptive Error Recovery
Later iterations show the model recovering from failed queries rather than getting stuck.
**Concrete example — empty results, retry with modified filter (Microsoft):**
```
Turn 3: sql_query(WHERE expense_type='interest...' AND expense_type='total...')
→ [] ← Empty! (impossible AND)
Turn 4: sql_query(WHERE expense_type='interest_on_lease_liabilities')
→ [{"year_ended_june_30_2022": "$429"}] ← Split query works
Turn 5: sql_query(WHERE expense_type='total_finance_lease_cost')
→ [{"year_ended_june_30_2022": "$1,409"}]
Turn 6: submit_answer("0.304") ← Correct: 429/1409
```
**Concrete example — blank row labels for totals (Caterpillar):**
```
Turn 3: sql_query(WHERE component='u.s.') → [{"2022": "$2,962"}] ← Got numerator
Turn 4: sql_query(WHERE component='total') → [] ← Empty! No "total" row
Turn 5: sql_query(WHERE component='total') → [] ← Retries, still empty
Turn 6: sql_query(WHERE component='') → [{"2022": "$8,752"}] ← Finds total in blank row!
Turn 7: submit_answer("0.3385") ← Correct: 2962/8752
```
The model learned that some SEC filing tables use blank row labels for totals, a dataset-specific convention it discovered through trial and error across training.
#### Shortcut Arithmetic
The model learned to identify questions where all numeric values are embedded directly in the question text, and skip database exploration entirely:
```
Question: "What is the equity-to-asset ratio computed as net acquired assets including
goodwill (9,638 million USD) divided by total assets (20,461 million USD)?"
Turn 1: submit_answer("0.4708") ← Computed 9638/20461 immediately, no tools needed
```
There are ~130 such 1-turn successes in late training (99.4% accuracy), representing an efficient learned optimization. The model recognizes when it has enough information to answer without exploring the database.
#### SQL Quality
The overall SQL error rate dropped from 8.3% to 0.5% across training:
| Iteration | SQL Calls | Errors | Error Rate |
| --------- | --------- | ------ | ---------- |
| 1 | 2,462 | 204 | 8.3% |
| 3 | 1,630 | 113 | 6.9% |
| 5 | 1,506 | 67 | 4.4% |
| 7 | 1,515 | 23 | 1.5% |
| 10 | 1,814 | 9 | **0.5%** |
The dominant error source was `SELECT *` (634 of 708 total errors, 89.5%), which the environment blocks as "too inefficient." This was completely eliminated by iteration 8.
Interestingly, the model's SQL queries became _simpler_ over training. Usage of aggregations (`SUM`, `COUNT`, etc.) and `CAST` operations declined, while `LIMIT` usage increased. The model learned that targeted, simple queries with precise WHERE clauses are more reliable than complex aggregations, a reasonable strategy given the table structures.
#### What Still Fails
Even at 52% accuracy, nearly half the episodes fail. The nature of failures shifted dramatically over training:
**Early failures**: Dominated by the year-echo bug (28%), raw data dumps (20%), and `SELECT *` errors (8%). These are _systematic_ bugs that affect almost every question.
**Late failures**: Almost entirely **"has correct data, computes wrong answer"**. The model retrieves the right numbers from the right table but produces an incorrect final answer. The information retrieval problem is solved; the arithmetic/interpretation problem remains.
**"Net change" vs "percentage change" confusion** — the most systematic remaining failure. When a question asks for "net change" or "absolute change" (expecting a dollar amount), the model computes a percentage instead:
```
Question: "What is the net change in unrecognized tax benefits from Dec 31, 2021 to Dec 31, 2022?"
Data retrieved: Dec 2021 = $531M, Dec 2022 = $870M
Expected answer: 870 - 531 = 339 (absolute difference)
Model submits: 0.635 (percentage change: (870-531)/531)
```
All 23 attempts on this question across late training submit exactly `0.635`; the model has converged to a consistent-but-wrong policy. The GRPO training signal pushed the model toward "always output a ratio/percentage," which is correct for ~80% of questions but wrong for absolute-change questions. Hence, the model found a mode that works for most questions and cannot escape it.
**Sign errors on decreasing values** — when a value decreases year-over-year, the percentage change should be negative. The model frequently submits the absolute value:
```
Question: "Percentage change in Life and Health premiums from 2023 to 2024?"
Data: 2024 = $5,007, 2023 = $5,093
Correct: (5007-5093)/5093 = -0.0169
Model submits: 0.0169 (positive — wrong sign)
```
This error is perfectly consistent across all 19 attempts on this question, showing a systematic blind spot rather than random error.
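For reference, a sign-preserving percentage change is one line of arithmetic; the sign falls out of the subtraction order (values from the example above):

```python
def pct_change(new: float, old: float) -> float:
    # Signed percentage change: negative whenever the value decreased.
    return (new - old) / old

print(round(pct_change(5007, 5093), 4))  # -0.0169
```

The model's failure is not in the division but in dropping the sign afterward, a post-processing habit rather than an arithmetic mistake.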
#### Summary
1. **The model's exploration strategy was already correct from iteration 1.** The `get_descriptions` → `get_table_info` → `sql_query` → `submit_answer` pipeline was established from the start. GRPO did not need to teach the model _how_ to use tools.
2. **Two specific behavioral fixes drove most of the improvement:** quoting year-string column names in SQL (22% → 99.8% adoption) and submitting single numeric answers instead of raw data (56% → 93%).
3. **The remaining gains came from subtler improvements:** parsing parenthetical negatives as negative values, recovering from empty query results, and learning company/table-specific conventions (like blank-row totals).
4. **The residual error is an arithmetic/interpretation problem, not an information retrieval problem.** The vast majority of late failures have the correct data but compute the wrong answer.
5. **GRPO's group-based advantage signal was effective at eliminating systematic bugs** (year-echo, `SELECT *`) but insufficient to escape local optima for answer formatting (percentage vs absolute change). Longer training or a more nuanced reward signal might address the remaining failures.
### 8. Training Configuration
The `train.py` script accepts the following command-line arguments:
| Parameter | Default | Description |
| -------------------------- | ---------------- | ---------------------------------------------- |
| `--sandboxes` | 500 | Number of concurrent Daytona sandboxes |
| `--iterations` | 10 | Training iterations |
| `--group-size` | 6 | Episodes per prompt group for GRPO |
| `--target-groups-per-iter` | 100 | Target number of complete groups per iteration |
| `--max-rollout-rounds` | 8 | Max rollout rounds per iteration |
| `--snapshot` | `openenv-finqa` | Daytona snapshot name |
| `--model` | `Qwen/Qwen3-14B` | HuggingFace model ID |
| `--lr` | 8e-5 | Learning rate |
| `--temperature` | 1.0 | Sampling temperature |
| `--max-steps` | 20 | Max episode steps before forced termination |
| `--max-gen-tokens` | 512 | Max tokens per generation |
| `--tensor-parallel-size` | 2 | vLLM tensor parallelism |
| `--gpu-memory-utilization` | 0.85 | vLLM GPU memory fraction |
| `--lora-rank` | 16 | LoRA rank |
| `--lora-alpha` | 32 | LoRA alpha |
| `--sync-every` | 1 | Export LoRA adapter every N iterations |
| `--grpo-update-batch-size` | 12 | Micro-batch size for GRPO updates |
:::tip[Scaling Tips]
- For a quick smoke test, use `--sandboxes 2 --iterations 1 --group-size 2`
- Increase `--max-steps` for questions that require more exploration
- Decrease `--grpo-update-batch-size` if you run out of GPU memory during the update step
- Keep `--group-size` at 6+ for meaningful advantage estimation within groups
:::
---
**Key advantages of this approach:**
- **Multi-turn tool use**: Agents learn to iteratively explore and query financial data across multiple steps
- **Massive parallelism**: Hundreds of sandboxes collect episodes simultaneously
- **Safe execution**: SQL queries and data exploration execute in isolated environments
- **OpenEnv protocol**: Standard RL environment interface over WebSocket, decoupling the environment from the agent
This guide demonstrates how to use Daytona sandboxes to safely execute hundreds of code completions in parallel during reinforcement learning training.
We use [TRL](https://huggingface.co/docs/trl/)'s GRPOTrainer together with 500 Daytona sandboxes evaluating completions concurrently, in order to train the `Qwen3-1.7B-Base` model on some basic code-writing tasks.
---
### 1. Workflow Overview
This guide presents a simple, self-contained script that performs reinforcement learning training of `Qwen3-1.7B-Base`. In particular, we use **reinforcement learning with verifiable rewards**, with the reward being computed from the test pass rate of model-written functions.
The training loop consists of the following steps:
1. **Generate**: The model produces many code completions for each prompt (e.g., 250 completions per prompt per step)
2. **Evaluate**: Each completion runs in its own Daytona sandbox against a test suite
3. **Reward**: Completions that pass more tests get higher rewards; errors or banned patterns get negative rewards
4. **Update**: GRPO reinforces completions that scored above their group average
The evaluation step happens in parallel across all 500 sandboxes. The sandboxes are spawned once at the start of training, reused throughout, and cleaned up after training completes.
### 2. Setup
#### Clone the Repository
:::note[GPU Requirement]
This guide is written to run on a single 80GB VRAM GPU. If you want to run it on a GPU with less VRAM, you can decrease the `per_device_train_batch_size` parameter, increasing `gradient_accumulation_steps` proportionally if you wish to keep the effective batch size at 500.
:::
Clone the [Daytona repository](https://github.com/daytonaio/daytona.git) and navigate to the example directory:
```bash
git clone https://github.com/daytonaio/daytona.git
cd daytona/guides/python/reinforcement-learning/trl
```
#### Create Virtual Environment
:::note[Python Version]
Python 3.10 or higher is required. A GPU with 80GB+ VRAM is recommended for training.
:::
```bash
python3 -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
```
#### Install Dependencies
```bash
pip install -e .
```
This installs:
- `daytona` - Daytona SDK for sandbox management
- `trl[vllm]` - TRL with vLLM integration for fast inference
- `datasets` - HuggingFace datasets library
- `python-dotenv` - Environment variable management
#### Configure Environment
Get your Daytona API key from the [Daytona Dashboard](https://app.daytona.io/dashboard/keys) and create a `.env` file:
```bash
DAYTONA_API_KEY=your_daytona_api_key
```
### 3. Understanding the Code
Let's walk through the key components of the training script.
#### Task Definitions
The script defines coding tasks as prompts with test cases. Note that the prompts are written in completion mode rather than QA mode, because `Qwen3-1.7B-Base` is a base model rather than an instruct model. Each task specifies what the model should generate and how to verify correctness:
```python
SORTING_PROMPT = """# I've been fiddling with different ways to sort numbers in Python.
# At first I just used sorted() and list.sort(), but then I decided to try
# my hand at writing some original sorting functions. And I succeeded!
# I don't call sorted(), list.sort(), heapq, or use any imports here - just plain
# Python and an original algorithm.
def sort_numbers(xs: list[int]) -> list[int]:
\"\"\"Sort a list of integers in ascending order.
Args:
xs: A list of integers to be sorted.
Returns:
A new list containing the same integers, sorted from smallest to largest.
\"\"\"
"""
TASKS = {
"sorting": {
"prompt": SORTING_PROMPT,
"func_name": "sort_numbers",
"banned_patterns": ["sorted(", ".sort(", "heapq", "import ", "__import__"],
"tests": [
"[]",
"[1, 3, 2]",
"[random.randint(-1000, 1000) for _ in range(200)]",
"[random.randint(-100, 100) for _ in range(1000)]",
"list(range(0, 100)) + list(range(200, 100, -1)) + list(range(200, 300))",
],
"reference": "sorted",
},
# Additional tasks can be added here...
}
```
Each task includes:
- **prompt**: The code context the model continues from
- **func_name**: The function name being implemented
- **banned_patterns**: Patterns that disqualify a completion (e.g., using built-in `sorted()`)
- **tests**: Test inputs to verify correctness
- **reference**: The reference implementation to compare against
#### How Prompts Become Completions
When the model receives the sorting prompt, it continues the text as if completing a Python file. A typical model output might look like:
```
if len(xs) <= 1:
return xs
pivot = xs[len(xs) // 2]
left = [x for x in xs if x < pivot]
middle = [x for x in xs if x == pivot]
right = [x for x in xs if x > pivot]
return sort_numbers(left) + middle + sort_numbers(right)
# Example usage:
print(sort_numbers([3, 1, 4, 1, 5, 9, 2, 6]))
```
Notice the model generates the indented function body, but may also add extra content after (comments, example usage, etc.). The `sanitize_completion` function extracts only the indented lines that form the function body:
```python
from typing import List

def sanitize_completion(text: str) -> str:
    # Keep lines until the first non-empty unindented line,
    # i.e. only the indented function body (blank lines included)
    lines = text.splitlines()
    kept: List[str] = []
    for line in lines:
        if line and (not line.startswith(" ")):
            break
        kept.append(line)
    return "\n".join(kept).rstrip()
```
After sanitization, the example above becomes just the function body:
```python
if len(xs) <= 1:
return xs
pivot = xs[len(xs) // 2]
left = [x for x in xs if x < pivot]
middle = [x for x in xs if x == pivot]
right = [x for x in xs if x > pivot]
return sort_numbers(left) + middle + sort_numbers(right)
```
#### Sandbox Pool Management
We create the sandbox pool upfront and reuse sandboxes throughout training:
```python
EFFECTIVE_BATCH_SIZE = 500
# We evaluate each completion concurrently, in its own sandbox,
# so we spawn EFFECTIVE_BATCH_SIZE number of sandboxes.
async def _create_sandbox_pool_async(
daytona: AsyncDaytona, n: int = 10
) -> List[AsyncSandbox]:
print(f"Creating {n} sandboxes...")
tasks = [daytona.create() for _ in range(n)]
sandboxes = await asyncio.gather(*tasks)
print(f"Successfully created all {len(sandboxes)} sandboxes")
return list(sandboxes)
async def _cleanup_sandbox_pool_async(sandbox_pool: List[AsyncSandbox]) -> None:
if not sandbox_pool:
return
print("Cleaning up sandboxes...")
tasks = [sandbox.delete() for sandbox in sandbox_pool]
results = await asyncio.gather(*tasks, return_exceptions=True)
for r in results:
if isinstance(r, Exception):
print(f" Sandbox delete error: {type(r).__name__}: {r}")
print("All sandboxes cleaned up")
```
The pool size (500) is chosen to match the total batch size (`per_device_train_batch_size * gradient_accumulation_steps`), ensuring every completion in a batch can be evaluated in parallel.
#### Code Evaluation
The main evaluation function ties everything together - it sanitizes the completion, checks for banned patterns, builds the test harness, executes it in a sandbox, and parses the results:
```python
async def evaluate_single_completion_async(
sandbox: AsyncSandbox,
raw_completion: str,
prompt: str,
) -> EvalResult:
task = PROMPT_TO_TASK[prompt]
num_task_tests = len(task["tests"])
body = sanitize_completion(raw_completion)
if not body.strip():
return _fail_result(num_task_tests)
if has_banned_pattern(body, task):
return _fail_result(num_task_tests)
code = build_test_harness(task, body)
try:
response = await sandbox.code_interpreter.run_code(
code, timeout=MAX_TIMEOUT_SECONDS
)
except DaytonaTimeoutError:
print(
f"Completion timed out after {MAX_TIMEOUT_SECONDS}s "
f"in sandbox {getattr(sandbox, 'id', '?')}"
)
return _fail_result(num_task_tests)
except Exception as e:
print(
f"Error evaluating completion in sandbox {getattr(sandbox, 'id', '?')}: "
f"{type(e).__name__}: {e}",
)
return _fail_result(num_task_tests)
if response.error is not None:
return _fail_result(num_task_tests)
raw_output = response.stdout.strip()
if not raw_output:
return _fail_result(num_task_tests)
last_line = raw_output.splitlines()[-1]
try:
results = json.loads(last_line)
except Exception:
return _fail_result(num_task_tests)
correct = results.get("results", [])
return {
"no_error": True,
"num_passed": sum(bool(x) for x in correct),
"num_tests": len(correct),
}
```
#### The Test Harness
The `build_test_harness` function combines the original prompt, the model's completion, and a test runner into Python code that ultimately executes on the sandbox:
```python
def build_test_harness(task: Dict[str, Any], function_body: str) -> str:
prompt = task["prompt"]
func_name = task["func_name"]
reference_function = task["reference"]
tests = task["tests"]
tests_tuple = ",\n ".join(tests)
return f"""{prompt}
{function_body}
import json
import random
random.seed(0)
def _kadane(xs):
max_sum = current = xs[0]
for x in xs[1:]:
current = max(x, current + x)
max_sum = max(max_sum, current)
return max_sum
def _run_tests():
tests = (
{tests_tuple}
)
results = []
for xs in tests:
try:
out = {func_name}(xs.copy())
expected = {reference_function}(xs.copy())
results.append(out == expected)
except Exception:
results.append(False)
print(json.dumps({{"results": results}}))
if __name__ == "__main__":
_run_tests()
"""
```
For the sorting task with a quicksort completion, the assembled code looks like:
```python
# I've been fiddling with different ways to sort numbers in Python...
def sort_numbers(xs: list[int]) -> list[int]:
"""Sort a list of integers in ascending order..."""
if len(xs) <= 1:
return xs
pivot = xs[len(xs) // 2]
left = [x for x in xs if x < pivot]
middle = [x for x in xs if x == pivot]
right = [x for x in xs if x > pivot]
return sort_numbers(left) + middle + sort_numbers(right)
import json
import random
random.seed(0)
def _run_tests():
tests = (
[],
[1, 3, 2],
[random.randint(-1000, 1000) for _ in range(200)],
# ... more tests
)
results = []
for xs in tests:
try:
out = sort_numbers(xs.copy())
expected = sorted(xs.copy())
results.append(out == expected)
except Exception:
results.append(False)
print(json.dumps({"results": results}))
if __name__ == "__main__":
_run_tests()
```
When executed in the sandbox, this prints JSON to stdout:
```json
{"results": [true, true, true, false, true]}
```
The evaluation function parses this JSON to count how many tests passed.
#### Banned Pattern Detection
Before running code in the sandbox, we check for banned patterns. This prevents the model from "cheating" by using built-in functions:
```python
def has_banned_pattern(text: str, task: Dict[str, Any]) -> bool:
banned = task.get("banned_patterns", [])
if not banned:
return False
lowered = text.lower()
return any(p.lower() in lowered for p in banned)
```
For the sorting task, banned patterns include `sorted(`, `.sort(`, `heapq`, and `import`. If the model generates `return sorted(xs)`, it gets a reward of -1.0 instead of being executed - we want the model to learn to write actual sorting algorithms, not to call built-in functions.
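Calling the check on a "cheating" completion shows the gate in action (the function is repeated from above so the snippet is self-contained):

```python
from typing import Any, Dict

def has_banned_pattern(text: str, task: Dict[str, Any]) -> bool:
    banned = task.get("banned_patterns", [])
    if not banned:
        return False
    lowered = text.lower()
    return any(p.lower() in lowered for p in banned)

task = {"banned_patterns": ["sorted(", ".sort(", "heapq", "import "]}
print(has_banned_pattern("return sorted(xs)", task))         # True  -> scored -1.0, never executed
print(has_banned_pattern("    return quicksort(xs)", task))  # False -> runs in a sandbox
```

Because the match is a plain substring check, banned completions are rejected before a sandbox is ever touched, saving an execution slot for legitimate attempts.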
#### Parallel Batch Evaluation
The batch evaluator distributes completions across the sandbox pool:
```python
async def _evaluate_batch_async(
sandbox_pool: List[AsyncSandbox], completions: List[str], prompts: List[str]
) -> List[EvalResult]:
print(
f"Evaluating {len(completions)} completions in parallel across "
f"{len(sandbox_pool)} sandboxes..."
)
async def run_one(
i: int, sandbox: AsyncSandbox, completion: str, prompt: str
) -> EvalResult:
task = PROMPT_TO_TASK[prompt]
num_task_tests = len(task["tests"])
try:
stats = await evaluate_single_completion_async(sandbox, completion, prompt)
print(f" Completion {i + 1}/{len(completions)} done")
return stats
except Exception as e:
print(
f" Completion {i + 1}/{len(completions)} failed: "
f"{type(e).__name__}: {e}"
)
return _fail_result(num_task_tests)
tasks = [
run_one(i, sandbox_pool[i % len(sandbox_pool)], completion, prompt)
for i, (completion, prompt) in enumerate(zip(completions, prompts))
]
stats_list = await asyncio.gather(*tasks)
print(f" Done: {len(completions)}/{len(completions)} completions evaluated")
return stats_list
```
Each completion is assigned to a sandbox using round-robin distribution (`i % len(sandbox_pool)`), ensuring even load distribution.
#### Reward Function
The reward function receives the results from the sandboxes and computes the corresponding scalar reward.
```python
def reward_func(prompts, completions, **kwargs):
stats_list = run_async(
_evaluate_batch_async(sandbox_pool, completions, prompts)
)
rewards = []
for s in stats_list:
if not s["no_error"]:
rewards.append(-1.0)
elif s["num_tests"] == 0:
rewards.append(0.0)
else:
rewards.append(s["num_passed"] / s["num_tests"])
return rewards
```
The reward scheme:
- **-1.0**: Error, timeout, or banned pattern detected
- **0.0**: No tests were present (shouldn't happen with valid tasks)
- **0.0 to 1.0**: Fraction of tests passed
#### Bridging Sync and Async
TRL's `GRPOTrainer` expects a synchronous reward function, but the Daytona SDK uses async/await for parallel sandbox operations. We bridge these two worlds with a helper:
```python
def main():
# Create a dedicated event loop for async operations
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
def run_async(coro: Awaitable[Any]) -> Any:
"""Run async code from sync context."""
return loop.run_until_complete(coro)
# ... training code ...
def reward_func(prompts, completions, **kwargs):
# This sync function is called by TRL
# We use run_async to call our async evaluation
stats_list = run_async(
_evaluate_batch_async(sandbox_pool, completions, prompts)
)
# ... compute rewards ...
return rewards
```
This pattern lets us keep the async parallelism benefits of the Daytona SDK while working within TRL's synchronous training loop. The `run_async` helper blocks until all 500 parallel sandbox evaluations complete, then returns the results.
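Stripped of the training context, the bridge is just a persistent event loop driven from synchronous code. A self-contained sketch, where the hypothetical `fake_eval` coroutine stands in for a sandbox call:

```python
import asyncio
from typing import Any, Awaitable

# Dedicated event loop, created once and reused for every reward call.
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)

def run_async(coro: Awaitable[Any]) -> Any:
    """Run async code from a sync context, blocking until it completes."""
    return loop.run_until_complete(coro)

async def fake_eval(i: int) -> int:
    await asyncio.sleep(0)  # stand-in for an async sandbox evaluation
    return i * i

async def fake_batch(n: int) -> list:
    # gather() runs all evaluations concurrently, like the real batch evaluator
    return list(await asyncio.gather(*(fake_eval(i) for i in range(n))))

print(run_async(fake_batch(4)))  # [0, 1, 4, 9]
```

Reusing one loop across calls matters: creating a fresh loop inside every reward invocation would discard connection state the SDK may hold between batches.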
#### Training Configuration
The GRPO trainer is configured with these parameters:
```python
training_args = GRPOConfig(
output_dir="training_results",
per_device_train_batch_size=20,
# batch size chosen so the training runs comfortably on a single 80GB GPU,
# if running this on a GPU with less memory, reduce the batch size accordingly
gradient_accumulation_steps=25,
num_generations=EFFECTIVE_BATCH_SIZE // len(TASKS),
max_prompt_length=256,
max_completion_length=512,
learning_rate=8e-6,
num_train_epochs=1,
logging_steps=1,
report_to="none",
max_steps=8,
bf16=True,
use_vllm=True,
vllm_mode="colocate",
vllm_gpu_memory_utilization=0.15,
gradient_checkpointing=True,
loss_type="dapo",
beta=0.01,
)
```
Key settings explained:
**Batch size and sandbox pool alignment:**
```
per_device_train_batch_size (20) × gradient_accumulation_steps (25) = 500
```
This equals `EFFECTIVE_BATCH_SIZE`. Each training step generates exactly 500 completions, and we have exactly 500 sandboxes - so every completion evaluates in parallel with no waiting. If we had fewer sandboxes, some completions would queue up. If we had more, sandboxes would sit idle.
**vLLM colocate mode:**
```python
use_vllm=True,
vllm_mode="colocate",
vllm_gpu_memory_utilization=0.15,
```
This runs vLLM for fast inference on the same GPU as training. We use 15% of the GPU's memory for model generation, and the rest for training (optimizer states).
**Generation settings:**
- `num_generations=EFFECTIVE_BATCH_SIZE // len(TASKS)`: Generate 250 completions per prompt (500 / 2 tasks). With 2 prompts (sorting and max_subarray), that's 500 total per step
- `max_completion_length=512`: Limit completion length to prevent runaway generation
### 4. Running the Training
Start training with:
```bash
python train.py
```
You'll see output like:
```
Creating 500 sandboxes...
Successfully created all 500 sandboxes
Evaluating 500 completions in parallel across 500 sandboxes...
Completion 1/500 done
Completion 2/500 done
...
Done: 500/500 completions evaluated
```
After training completes, metrics are saved to `training_results/metrics.jsonl` and the model is saved as `training_results/checkpoint-8`.
### 5. Example Evaluation Walkthrough
Let's trace through what happens when evaluating a single completion:
**Step 1: Model generates a completion**
The model receives the sorting prompt and generates:
```
if len(xs) <= 1:
return xs
pivot = xs[0]
less = [x for x in xs[1:] if x <= pivot]
greater = [x for x in xs[1:] if x > pivot]
return sort_numbers(less) + [pivot] + sort_numbers(greater)
# Test
print(sort_numbers([3, 1, 2]))
```
**Step 2: Sanitization extracts the function body**
`sanitize_completion` keeps only the indented lines:
```python
if len(xs) <= 1:
return xs
pivot = xs[0]
less = [x for x in xs[1:] if x <= pivot]
greater = [x for x in xs[1:] if x > pivot]
return sort_numbers(less) + [pivot] + sort_numbers(greater)
```
**Step 3: Check for banned patterns**
`has_banned_pattern` scans for `sorted(`, `.sort(`, `heapq`, `import`. None found, so we proceed.
**Step 4: Build the test harness**
`build_test_harness` assembles the full script: prompt + completion + test runner. This becomes ~50 lines of executable Python.
**Step 5: Execute in sandbox**
```python
response = await sandbox.code_interpreter.run_code(code, timeout=1)
```
The sandbox runs the code and returns within the 1-second timeout.
**Step 6: Parse results**
The test runner printed:
```json
{"results": [true, true, true, true, true]}
```
We parse this from `response.stdout`:
```python
results = json.loads(response.stdout.strip().splitlines()[-1])
# {"results": [true, true, true, true, true]}
```
**Step 7: Compute reward**
All 5 tests passed:
```python
reward = 5 / 5 # = 1.0
```
This completion gets a perfect reward of 1.0, reinforcing the model to generate similar quicksort implementations.
### 6. Training Results
The plot below shows average rewards over training steps. At the start, the model rarely writes functions that meet the task specifications, often producing code that errors out or times out. Given our large effective batch size of 500, the model achieves near-perfect performance after only 8 steps.
### 7. Adding Custom Tasks
To add a new coding task, extend the `TASKS` dictionary:
```python
TASKS = {
"your_task": {
"prompt": "Your prompt here...",
"func_name": "function_name",
"banned_patterns": ["patterns", "to", "ban"],
"tests": [
"test_input_1",
"test_input_2",
],
"reference": "reference_function",
},
}
```
The reference function should be defined in the test harness that `build_test_harness` generates.
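For instance, the harness shown earlier already defines `_kadane` (Kadane's maximum-subarray algorithm), which serves as the reference for the max_subarray task. A hypothetical task entry referencing it could look like this (the prompt text is illustrative, not the one shipped with the script):

```python
# Hypothetical task entry; the prompt text below is illustrative only.
MAX_SUBARRAY_PROMPT = """# A from-scratch maximum-subarray function, no imports.
def max_subarray(xs: list[int]) -> int:
    \"\"\"Return the maximum sum over all non-empty contiguous subarrays.\"\"\"
"""

TASKS = {
    "max_subarray": {
        "prompt": MAX_SUBARRAY_PROMPT,
        "func_name": "max_subarray",
        "banned_patterns": ["import ", "__import__"],
        "tests": ["[1, -2, 3, 4]", "[-1, -2, -3]"],
        # _kadane is defined inside the script emitted by build_test_harness
        "reference": "_kadane",
    },
}
```

Because `reference` is looked up by name inside the generated harness, any helper you add there (like `_kadane`) is usable without importing anything into the sandbox.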
### 8. Configuration Options
| Parameter | Default | Description |
|-----------|---------|-------------|
| `EFFECTIVE_BATCH_SIZE` | 500 | Effective batch size, also equal to the number of parallel sandboxes |
| `MAX_TIMEOUT_SECONDS` | 1 | Timeout per code execution |
| `MODEL_NAME` | `Qwen/Qwen3-1.7B-Base` | Base model to train |
:::tip[Scaling Tips]
- Keep `per_device_train_batch_size * gradient_accumulation_steps` equal to `EFFECTIVE_BATCH_SIZE` for optimal parallelism
- Increase `MAX_TIMEOUT_SECONDS` for tasks with more (algorithmically) complex test cases
:::
---
**Key advantages of this approach:**
- **Massive parallelism**: 500 sandboxes evaluate completions simultaneously
- **Safe execution**: Generated code runs in isolated environments, protecting your system
- **Fast feedback**: vLLM + parallel evaluation minimizes training iteration time
- **Extensible**: Add new coding tasks by defining prompts and test cases
This guide demonstrates how to run [veRL's](https://github.com/verl-project/verl) ReTool recipe with Daytona sandboxes as the tool execution backend, scaling up to hundreds of tool calls per training step without hitting a concurrency ceiling.
---
### 1. Overview
[veRL](https://github.com/verl-project/verl) is a distributed RL post-training framework for LLMs. The [ReTool](https://arxiv.org/abs/2504.11536v1) recipe trains models to solve math problems by writing and executing Python code across multi-turn interactions.
During each training step, the model generates responses and writes Python code to verify intermediate computations. veRL's agent loop manages the sandbox lifecycle per trajectory:
1. **`create()`** — A sandbox is created for the trajectory (one per trajectory, reused across turns)
2. **`execute()`** — The model's code runs inside the sandbox and the result is returned
3. The model reads the result and continues generating, possibly calling the tool again
4. **`release()`** — The sandbox is deleted when the trajectory ends
Multiple trajectories run concurrently, each with its own isolated sandbox. The reward signal comes from final answer correctness, and the RL trainer reinforces trajectories where the model used the code interpreter effectively.
### 2. The Problem: Tool Execution Bottlenecks Rollout Speed
Tool execution typically dominates multi-turn RL rollout time. [VerlTool](https://arxiv.org/abs/2509.01055) shows the effect directly: trajectory-level asynchronous execution speeds up rollout time by **1.32x** on Math-TIR, **1.22x** on SQL, and **1.97x** on DeepSearch.
These speedups depend on the tool backend keeping pace with parallel requests. The GPU will sit idle if tool execution stalls.
### 3. Daytona as the ReTool Backend
By executing tool calls on Daytona sandboxes, the async rollout pipeline can scale to hundreds of concurrent executions without saturating the backend.
- **No per-instance concurrency ceiling.** A single API endpoint handles hundreds of concurrent sandbox operations, removing the need to deploy multiple instances to scale.
- **Fast parallel creation.** Hundreds of sandboxes are created in sub-second time at rollout start and reused for all tool calls in a trajectory.
- **Async SDK.** The `AsyncDaytona` client integrates directly with veRL's async rollout workers. Workers fire requests in parallel and process results as they arrive.
- **Automatic cleanup.** Sandboxes that fail or time out are automatically stopped and deleted, so leaked resources don't accumulate during long training runs.
The chart below compares code execution throughput between Docker containers and Daytona sandboxes.
With Docker containers, throughput plateaus as concurrency increases: container startup overhead dominates, and adding more parallelism doesn't help. Daytona sandboxes scale linearly and reach **98 calls/sec** at a concurrency of 128 — a **5.5x throughput improvement** at peak concurrency.
[Reproduce these results →](#benchmark-script)
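As a rough sketch, this kind of measurement boils down to firing a fixed number of calls with bounded concurrency and timing the whole batch. `measure_throughput` is an illustrative name, and `call` stands in for one sandboxed code execution:

```python
import asyncio
import time

async def measure_throughput(call, concurrency: int, total_calls: int) -> float:
    """Run total_calls invocations of `call` with at most `concurrency`
    in flight, and return the observed calls/sec."""
    semaphore = asyncio.Semaphore(concurrency)

    async def one():
        async with semaphore:
            await call()

    start = time.perf_counter()
    await asyncio.gather(*(one() for _ in range(total_calls)))
    return total_calls / (time.perf_counter() - start)
```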
### 4. Setup
#### Clone veRL and Initialize the Recipe Submodule
```bash
git clone https://github.com/verl-project/verl.git
cd verl
git submodule update --init --recursive recipe
cd recipe && git pull origin main && cd ..
```
#### Download the Model Checkpoint
The ReTool recipe expects a fine-tuned SFT checkpoint. Download the pre-trained 32B checkpoint from HuggingFace:
```bash
pip install huggingface_hub
huggingface-cli download JoeYing/ReTool-Qwen-32B-SFT --local-dir checkpoint/ReTool-Qwen-32B-SFT
```
See the [ReTool recipe README](https://github.com/verl-project/verl-recipe/tree/main/retool) for SFT data preparation if you want to train your own checkpoint on a different model size.
#### Download the Datasets
```bash
huggingface-cli download BytedTsinghua-SIA/DAPO-Math-17k --repo-type dataset --local-dir dataset/BytedTsinghua-SIA/DAPO-Math-17k
huggingface-cli download yentinglin/aime_2025 --repo-type dataset --local-dir dataset/yentinglin/aime_2025
huggingface-cli download Maxwell-Jia/AIME_2024 --repo-type dataset --local-dir dataset/Maxwell-Jia/AIME_2024
```
#### Create an Environment and Install Dependencies
:::note[Note]
veRL documents Python 3.10+ for installation.
:::
```bash
python3.10 -m venv .venv
source .venv/bin/activate
pip install -e .
pip install daytona
```
#### Export the Daytona API Key
Get your API key from the [Daytona Dashboard](https://app.daytona.io/dashboard/keys) and export it before running the recipe or the benchmark:
```bash
export DAYTONA_API_KEY="your_daytona_api_key"
```
### 5. Start Training
Use the existing ReTool launch script and point it at the Daytona tool config and the downloaded checkpoint:
```bash
TOOL_CFG=recipe/retool/daytona_tool_config.yaml
MODEL=$PWD/checkpoint/ReTool-Qwen-32B-SFT
bash recipe/retool/run_qwen2-32b_dapo.sh \
actor_rollout_ref.model.path=$MODEL \
actor_rollout_ref.rollout.multi_turn.tool_config_path=$TOOL_CFG \
trainer.project_name=retool_daytona \
trainer.experiment_name=qwen2.5-32b_dapo_daytona
```
The dataset, reward function, async rollout mode, and trainer setup stay the same. The only changes are the model path and tool config path.
### Benchmark Script
```bash
git clone https://github.com/daytonaio/daytona.git
cd daytona/guides/python/reinforcement-learning/verl-retool
# Docker containers (baseline — no additional dependencies)
python benchmark_tool_backends.py \
--backend docker \
--concurrency 1 4 8 16 32 64 128
# Daytona sandboxes (requires DAYTONA_API_KEY and veRL)
python benchmark_tool_backends.py \
--backend daytona \
--verl-root /path/to/verl \
--concurrency 1 4 8 16 32 64 128
```
Results are written to `outputs///` as `summary.json` and `results.csv`.
Benchmarked on macOS (Docker Desktop) and Daytona cloud (includes network round-trip). Absolute numbers may vary by environment.
### References
- [ReTool: Reinforcement Learning for Strategic Tool Use in LLMs](https://arxiv.org/abs/2504.11536)
- [VerlTool: Towards Holistic Agentic Reinforcement Learning with Tool Use](https://arxiv.org/abs/2509.01055)
- [veRL](https://github.com/verl-project/verl)
[DSPy](https://dspy.ai/)'s RLM implements [recursive language models](https://arxiv.org/abs/2512.24601), a system in which an LLM writes Python code that can pass parts of its own context to further LLM calls, significantly enhancing long-context reasoning.
The generated code runs in a REPL. In this guide we present `DaytonaInterpreter`, which plugs into DSPy as the code-execution backend so that all generated code runs inside an isolated Daytona cloud sandbox rather than on your machine.
---
### 1. Setup
#### Clone the Repository
Clone the [Daytona repository](https://github.com/daytonaio/daytona.git) and navigate to the example directory:
```bash
git clone https://github.com/daytonaio/daytona.git
cd daytona/guides/python/dspy-rlms
```
#### Create Virtual Environment
```bash
python3.10 -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
```
#### Install Dependencies
```bash
pip install -e .
```
This installs the DSPy framework and the Daytona SDK. To also run the included demo (which plots results with matplotlib), use:
```bash
pip install -e ".[demo]"
```
#### Configure Environment
Create a `.env` file with your API keys:
```bash
cp .env.example .env
# Edit .env with your keys
```
The file needs two variables:
```bash
DAYTONA_API_KEY=your_daytona_api_key
OPENROUTER_API_KEY=your_openrouter_api_key # or OPENAI_API_KEY / ANTHROPIC_API_KEY
```
:::note
Get your Daytona API key from the [Daytona Dashboard](https://app.daytona.io/dashboard/keys). The LLM provider key depends on which model you configure in your code — the examples use OpenRouter.
:::
### 2. Basic Usage
The following example shows the basic setup — configure a model, create a `DaytonaInterpreter`, and pass it to an RLM. The generated code can call `llm_query()` to delegate semantic work back to the LLM:
```python
import dspy
from dotenv import load_dotenv
from daytona_interpreter import DaytonaInterpreter
load_dotenv()
lm = dspy.LM("openrouter/anthropic/claude-sonnet-4.6")
dspy.configure(lm=lm)
interpreter = DaytonaInterpreter()
rlm = dspy.RLM(
    signature="documents: list[str], question: str -> answer: str",
    interpreter=interpreter,
    verbose=True,
)
documents = [...] # your documents
result = rlm(documents=documents, question="Summarize the key findings across these documents.")
print(result.answer)
interpreter.shutdown()
```
Inside the sandbox, the RLM might loop over the documents, call `llm_query()` to summarize each one, then aggregate the results with Python before calling `SUBMIT(answer=...)`.
### 3. Workflow Overview
Each RLM call runs an iterative REPL loop. The LLM writes Python code, the code executes in a Daytona sandbox, and the output is fed back to the LLM for the next iteration. Crucially, the generated code can call `llm_query()` to invoke a sub-LLM call — this is how the LLM delegates semantic work (understanding, extraction, classification) to itself while keeping the orchestration logic in Python.
1. **Prompt** — RLM sends the task inputs and previous turns to the LLM
2. **Code** — The LLM responds with reasoning and a Python code snippet
3. **Execute** — The code runs inside a Daytona sandbox; any `llm_query()` calls are bridged back to the host LLM
4. **Repeat** — Steps 1–3 repeat until the code calls `SUBMIT()` or the iteration limit is reached
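Stripped of framework details, the steps above can be sketched as a small loop. `rlm_loop`, `llm`, and `execute` are hypothetical stand-ins for the RLM internals, not the DSPy API:

```python
def rlm_loop(llm, execute, max_iterations: int = 10):
    """Hypothetical skeleton of the REPL loop: `llm` maps history to a code
    snippet, `execute` runs it and returns (output, submitted)."""
    history = []
    for _ in range(max_iterations):
        code = llm(history)                # steps 1-2: prompt the LLM, get code back
        output, submitted = execute(code)  # step 3: run the code in the sandbox
        if submitted is not None:          # stop once SUBMIT() was called
            return submitted
        history.append((code, output))     # step 4: feed the output back and repeat
    return None
```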
#### How Bridging Works
Step 3 above mentions that `llm_query()` calls are "bridged back to the host." Here's a diagram and an explanation of that process:
```
 Host Process                                    Daytona Sandbox
┌──────────────────────────────┐                ┌──────────────────────────────┐
│ DaytonaInterpreter           │                │ Broker Server (Flask)        │
│                              │                │                              │
│ • polls the broker for       │ tool call,     │ • accepts requests from      │
│   pending requests           │ e.g. llm_query │   the wrapper functions      │
│                              │◄───────────────│                              │
│ • calls the LLM API          │                │ • queues them for the host   │
│   or runs tool functions     │     result     │ • returns results once the   │
│ • posts results back         │───────────────►│   host replies               │
│                              │                │                              │
└──────────────────────────────┘                │ Generated Code               │
               │                                │ • llm_query()                │
               ▼                                │ • llm_query_batched()        │
            LLM API                             │ • custom tool wrappers       │
                                                └──────────────────────────────┘
```
When `DaytonaInterpreter` starts, it launches a small Flask broker server inside the sandbox and injects wrapper functions (`llm_query`, `llm_query_batched`, and any custom tools you provide). These wrappers POST requests to the broker and block until a result arrives. On the host side, a polling loop picks up pending requests, executes them (e.g. calls the LLM API or runs your tool function), and posts the results back to the broker. From the generated code's perspective, the wrappers look and behave like ordinary Python functions.
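The bridge can be approximated in-process with two queues. This is a hedged sketch of the request/response handshake, not the actual Flask/HTTP implementation:

```python
import queue
import threading

request_q: queue.Queue = queue.Queue()
result_q: queue.Queue = queue.Queue()

def llm_query(prompt: str) -> str:
    # Sandbox-side wrapper (sketch): hand the request to the broker
    # and block until the host posts a result back.
    request_q.put(prompt)
    return result_q.get()

def host_loop() -> None:
    # Host-side polling loop (sketch): pick up a pending request,
    # "call the LLM", and post the result back to the broker.
    prompt = request_q.get()
    result_q.put(f"(LLM answer to: {prompt})")

threading.Thread(target=host_loop, daemon=True).start()
reply = llm_query("Summarize chapter 1")
```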
Custom tools passed via the `tools` dict use the same mechanism: the host generates a matching wrapper inside the sandbox and bridges calls identically.
State persists across iterations: variables, imports, and function definitions all carry over.
#### Sub-LLM Calls
Two built-in functions are available inside the sandbox:
- **`llm_query(prompt)`** — send a single natural-language prompt to the LLM, get a string back
- **`llm_query_batched(prompts)`** — send multiple prompts concurrently, get a list of strings back
These execute on the host (they need LLM API access) and are bridged into the sandbox. From the generated code's perspective they are ordinary Python functions that take strings and return strings. This is what makes the pattern powerful: the LLM can write a `for` loop over 100 chapters, call `llm_query_batched()` to extract structured data from each one in parallel, then aggregate and use the results with additional Python code.
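A plausible sketch of the batched variant is a simple thread-pool fan-out. Here `llm_query` is passed in explicitly for testability, whereas inside the sandbox it is an injected global:

```python
from concurrent.futures import ThreadPoolExecutor

def llm_query_batched(prompts: list[str], llm_query) -> list[str]:
    """Fan the prompts out over a thread pool and return results in
    the original order (illustrative sketch, not the real bridge)."""
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(llm_query, prompts))
```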
### 4. Example Walkthrough
The included `demo.py` shows a realistic use of sub-LLM calls: literary analysis of _The Count of Monte Cristo_ — a ~1,300-page novel with 117 chapters — tracking the wealth trajectory of five major characters. The RLM uses `llm_query_batched()` to process chapters in parallel batches, then aggregates the results with Python.
#### How the Demo Works
The script fetches the full novel text from Project Gutenberg, splits it into chapters, and passes them to an RLM configured with a typed signature:
```python
interpreter = DaytonaInterpreter()
rlm = dspy.RLM(
    signature="chapters: list[str], task: str -> wealth_data: list[dict]",
    interpreter=interpreter,
    max_iterations=40,
    max_llm_calls=500,
    verbose=True,
)
chapters = fetch_chapters()
print(f"Fetched {len(chapters)} chapters")
TASK = (
    "Analyze the economic trajectory of each major character across the novel. "
    "For each chapter where a character's wealth status is mentioned or implied, "
    "produce a dict with keys: chapter (int), character (str), wealth (int 1-10 "
    "where 1=destitute and 10=richest in Paris), and event (str, brief description "
    "of what changed). Track the following characters: Dantès, Danglars, Fernand/"
    "Morcerf, Villefort, and Mercédès. You need to cover each chapter in the book."
)
result = rlm(chapters=chapters, task=TASK)
wealth_data = result.wealth_data
```
#### What the RLM Does
The RLM's generated code follows a pattern typical of sub-LLM workloads:
1. **Batch the input** — Split the 117 chapters into manageable groups
2. **Fan out with `llm_query_batched()`** — For each batch, send a prompt like _"Extract wealth events from these chapters as JSON"_ — the sub-LLM calls run concurrently on the host
3. **Parse and accumulate** — Each sub-call returns a string; the code parses the JSON and appends to a running list
4. **Iterate** — Repeat for the next batch; state (the accumulated list) persists across REPL iterations
5. **Submit** — Once all chapters are processed, call `SUBMIT(wealth_data=accumulated_results)`
This is the core RLM pattern: Python handles the data plumbing (batching, parsing, aggregating) while `llm_query_batched()` handles the parts that need language understanding (reading prose, identifying wealth events, rating severity).
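The pattern can be sketched as follows. `extract_wealth_data` is an illustrative helper, and `llm_query_batched` is stubbed with any callable that takes a list of prompts and returns a list of strings:

```python
import json

def extract_wealth_data(chapters, llm_query_batched, batch_size: int = 10):
    """Batch -> fan out -> parse -> accumulate (illustrative helper)."""
    wealth_data = []
    for start in range(0, len(chapters), batch_size):
        batch = chapters[start:start + batch_size]
        prompts = [
            f"Extract wealth events from this chapter as a JSON list:\n{chapter}"
            for chapter in batch
        ]
        for reply in llm_query_batched(prompts):
            wealth_data.extend(json.loads(reply))  # parse each sub-call's string reply
    return wealth_data
```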
#### Running the Demo
```bash
python demo.py
```
The script plots the results with matplotlib after the RLM finishes.
:::tip
The demo runs up to 40 iterations and 500 sub-LLM calls. Depending on the model and provider, a full run may take several minutes and consume significant API credits.
:::
#### Results
The output is a list of `{chapter, character, wealth, event}` dictionaries that the script plots as smoothed time series.
### 5. Conclusion
RLMs combine the LLM's language understanding with Python's ability to loop, branch, and aggregate — the generated code calls the LLM whenever it needs semantic reasoning and handles everything else with ordinary computation. `DaytonaInterpreter` makes this safe to run by executing all generated code in an isolated Daytona cloud sandbox:
- **Sub-LLM recursion** — `llm_query()` and `llm_query_batched()` are bridged from the sandbox to the host, letting generated code invoke the LLM for semantic tasks like extraction, classification, and summarization
- **Isolation** — All generated code runs in a Daytona cloud sandbox, not on your machine
- **Persistent state** — Variables, imports, and definitions survive across REPL iterations, so the LLM can accumulate results across batches
Guides for building and running Recursive Language Models with Daytona.
This guide demonstrates how to build a recursive language model (RLM) agent system that uses Daytona sandboxes, based on the approach pioneered in [Recursive Language Models](https://arxiv.org/abs/2512.24601) (Zhang, Kraska, Khattab) and further explored by [Prime Intellect](https://www.primeintellect.ai/blog/rlm).
While the original paper and Prime Intellect's implementation focus on single-level recursion (depth=1), this guide extends the concept to **unlimited recursion depth** — agents can spawn sub-agents, which can spawn their own sub-agents, and so on. Each agent runs in its own isolated Daytona sandbox with a fresh clone of the target repository.
---
### 1. Workflow Overview
The system implements a recursive agent architecture where agents can delegate subtasks to child agents:
1. **Initialize**: Root agent receives a task and gets a Daytona sandbox with a fresh repository clone
2. **Iterate**: Agent runs a loop: LLM call → extract Python code → execute in REPL
3. **Delegate**: Code can call `rlm_query()` to spawn sub-agents, each with their own sandbox
4. **Aggregate**: Sub-agents return results; parent synthesizes findings and optionally runs more code
5. **Complete**: Root agent receives all sub-agent results, produces a git patch; all sandboxes are cleaned up
```
Root Agent (depth=0)
├── Sub-Agent A (depth=1)
│   ├── Sub-Agent A1 (depth=2)
│   └── Sub-Agent A2 (depth=2)
└── Sub-Agent B (depth=1)
    ├── Sub-Agent B1 (depth=2)
    └── Sub-Agent B2 (depth=2)
```
Each agent runs in its own isolated Daytona sandbox with a fresh repository clone, enabling parallel exploration.
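The recursive structure can be illustrated with a toy function that builds exactly this kind of tree. The names here are illustrative, not from the codebase:

```python
def run_agent(task: str, depth: int = 0, max_depth: int = 2, fanout: int = 2) -> dict:
    """Toy sketch of the delegation tree above: each agent may hand off
    `fanout` subtasks until max_depth is reached."""
    node = {"task": task, "depth": depth, "children": []}
    if depth < max_depth:
        for i in range(fanout):
            node["children"].append(run_agent(f"{task}.{i}", depth + 1, max_depth, fanout))
    return node

tree = run_agent("root")  # 1 root, 2 children, 4 grandchildren
```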
### 2. Setup
#### Clone the Repository
Clone the [Daytona repository](https://github.com/daytonaio/daytona.git) and navigate to the example directory:
```bash
git clone https://github.com/daytonaio/daytona.git
cd daytona/guides/python/recursive-language-models
```
#### Create Virtual Environment
```bash
python3.10 -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
```
#### Install Dependencies
```bash
pip install -e .
```
This installs:
- `daytona` - Daytona SDK for sandbox management
- `litellm` - Unified LLM interface for any provider
- `typer` - CLI framework
- `pyyaml` - Configuration parsing
#### Configure Environment
Get your Daytona API key from the [Daytona Dashboard](https://app.daytona.io/dashboard/keys) and create a `.env` file:
```bash
DAYTONA_API_KEY=your_daytona_api_key
LLM_API_KEY=your_llm_api_key
```
The `LLM_API_KEY` is used via [LiteLLM](https://docs.litellm.ai/), supporting OpenRouter, OpenAI, Anthropic, and other providers.
### 3. Running an Agent
With setup complete, let's run an agent. Here's an example that investigates TODO comments in scikit-learn:
```bash
python run.py https://github.com/scikit-learn/scikit-learn \
-p "Investigate TODO comments across this repository. Spawn sub-agents to explore different modules. Find the easiest TODO and fix it."
```
This spawns a root agent that explores the codebase, delegates to sub-agents for parallel investigation, and produces a git patch fixing the easiest TODO it finds. We'll walk through the results and trace the execution in detail later, but first, let's look at how the code works.
#### CLI Options
| Option | Description |
|--------|-------------|
| `repo` | GitHub repository URL (required) |
| `-p, --prompt` | Task prompt for the agent (required) |
| `-b, --branch` | Branch name (optional) |
| `--commit` | Specific commit SHA (optional) |
| `-c, --config` | Path to config file (default: `config.yaml`) |
| `-o, --output` | Output file for patch (default: stdout) |
### 4. Understanding the Code
Let's walk through the key components of the agent system.
#### Agent Execution Loop
Each agent runs an iteration loop that calls the LLM, extracts code blocks, and executes them. The core loop in `agent.py`:
```python
def _run_loop(self) -> None:
    """Run the main iteration loop."""
    system_prompt = build_system_prompt(depth=self.depth)
    messages = [{"role": "system", "content": system_prompt}]
    execution_result = None
    for iteration in range(self.config.rlm.max_iterations):
        # Check global timeout
        if self._is_timeout():
            break
        # Build user prompt with previous execution result
        user_prompt = build_user_prompt(iteration, execution_result)
        messages.append({"role": "user", "content": user_prompt})
        # Get model completion
        response = self.client.completion(messages)
        messages.append({"role": "assistant", "content": response})
        # Execute code blocks in REPL
        repl_result = self.repl.execute_response(response)
        # Check for final answer
        if repl_result.final_answer is not None:
            self._result = repl_result.final_answer
            break
        # Format result for next iteration
        execution_result = format_execution_result(...)
```
Each iteration:
1. Builds a prompt with context from previous execution
2. Gets an LLM completion
3. Extracts and executes Python code blocks
4. Checks if the agent called `FINAL()` to submit results
5. Formats the output for the next iteration
#### Sub-Agent Spawning
When agent code calls `rlm_query()`, a new sub-agent is created with its own sandbox:
```python
def _handle_rlm_query(self, task: str) -> str:
    """Spawn a sub-agent for a specific task."""
    # Check sandbox budget
    if not self.sandbox_manager.budget.can_acquire():
        return "Error: sandbox budget exhausted"
    # Create sub-agent at depth + 1
    sub_agent = RLMAgent(
        client=self.client,
        sandbox_manager=self.sandbox_manager,
        config=self.config,
        depth=self.depth + 1,
        task=task,
        # ... other params
    )
    # Run sub-agent (blocking)
    result = sub_agent.run()
    # Return result, truncated if necessary
    return result.result or "No result"
```
For parallel spawning, `rlm_query_batched()` uses a thread pool:
```python
def _handle_rlm_query_batched(self, tasks: list[str]) -> list[str]:
    """Spawn multiple sub-agents in parallel."""
    results = [""] * len(tasks)
    with ThreadPoolExecutor(max_workers=10) as executor:
        future_to_idx = {
            executor.submit(self._handle_rlm_query, task): i
            for i, task in enumerate(tasks)
        }
        for future in as_completed(future_to_idx):
            idx = future_to_idx[future]
            results[idx] = future.result()
    return results
```
#### Agent Code Interface
Inside the REPL, agents have access to these functions:
| Function | Description |
|----------|-------------|
| `rlm_query(task)` | Spawn a single sub-agent, returns result string |
| `rlm_query_batched(tasks)` | Spawn multiple sub-agents in parallel |
| `FINAL(answer)` | Submit final result (root: triggers patch extraction) |
| `FINAL_VAR(var_name)` | Submit the value of a variable as result |
| `edit_file(path, old, new)` | Edit a file with syntax validation |
Example spawning pattern used by agents:
```python
# Spawn multiple sub-agents to explore different modules
results = rlm_query_batched([
    "Search for TODO comments in sklearn/linear_model/ and assess difficulty",
    "Search for TODO comments in sklearn/ensemble/ and assess difficulty",
    "Search for TODO comments in sklearn/tree/ and assess difficulty",
])
for i, result in enumerate(results):
    print(f"=== Sub-agent {i+1} findings ===")
    print(result)
```
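For the editing side of the interface, a hedged sketch of what `edit_file(path, old, new)`'s syntax validation might look like follows; the real implementation may differ:

```python
import ast
import pathlib

def edit_file(path: str, old: str, new: str) -> str:
    """Replace `old` with `new` in `path`; for .py files, reject edits
    that would leave the file syntactically invalid (sketch)."""
    source = pathlib.Path(path).read_text()
    if old not in source:
        return f"Error: text to replace not found in {path}"
    updated = source.replace(old, new, 1)
    if path.endswith(".py"):
        try:
            ast.parse(updated)
        except SyntaxError as exc:
            return f"Error: edit would break syntax: {exc}"
    pathlib.Path(path).write_text(updated)
    return "OK"
```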
### 5. Example Walkthrough
Let's trace what happens when we run an agent on a popular machine learning library, scikit-learn:
```bash
python run.py https://github.com/scikit-learn/scikit-learn \
-p "Investigate TODO comments across this repository. Spawn sub-agents to explore different modules under sklearn/ in parallel. For each TODO found, assess how difficult it would be to fix (easy/medium/hard). After gathering results, pick the easiest TODO and fix it."
```
Note that there are about 400 lines in scikit-learn that contain the substring "# TODO".
**Step 1: Root agent explores and spawns depth-1 sub-agents**
The root agent (depth=0) examines the repository structure, identifies all sklearn modules, and spawns 25 sub-agents in parallel:
```python
# Define the subdirectories to investigate
subdirs = [
    "cluster", "compose", "covariance", "cross_decomposition", "datasets",
    "decomposition", "ensemble", "feature_extraction", "feature_selection",
    "gaussian_process", "impute", "inspection", "linear_model", "manifold",
    "metrics", "mixture", "model_selection", "neighbors", "neural_network",
    "preprocessing", "semi_supervised", "svm", "tree", "utils"
]
# Create queries for sub-agents
queries = [
    f"Search for 'TODO' comments in 'sklearn/{subdir}/'. For each TODO found, provide: "
    f"1. The file path and line number. 2. The content of the TODO. 3. An assessment "
    f"of how difficult it would be to fix (easy/medium/hard) with a brief justification."
    for subdir in subdirs
]
results = rlm_query_batched(queries)
```
Each of these 25 sub-agents gets its own Daytona sandbox with a fresh clone of scikit-learn.
**Step 2: Depth-1 agents spawn depth-2 agents**
Some depth-1 agents decide their module is too large and spawn their own sub-agents. For example, the `sklearn/metrics/` agent spawned 3 depth-2 agents:
```python
# Inside the sklearn/metrics/ agent (depth=1)
# To efficiently handle the large number of TODOs, spawn sub-agents for sub-directories
tasks = [
    "Identify and assess TODOs in 'sklearn/metrics/cluster/'. Provide file, line, content, and difficulty.",
    "Identify and assess TODOs in 'sklearn/metrics/tests/'. Provide file, line, content, and difficulty.",
    "Identify and assess TODOs in 'sklearn/metrics/_plot/' and its 'tests' sub-directory."
]
results = rlm_query_batched(tasks)
```
**Step 3: Results propagate back**
Each sub-agent returns findings via `FINAL()`. Results flow back up:
- Depth-2 → Depth-1: Detailed analysis of specific subdirectories
- Depth-1 → Root: Module-level summaries with difficulty ratings
**Step 4: Root agent synthesizes and acts**
The root agent reviews all findings, identifies the easiest TODO, and makes the fix.
**Step 5: Git patch produced**
```python
import subprocess
subprocess.run(['git', 'add', '-A'], cwd='/workspace')
result = subprocess.run(['git', 'diff', '--cached', 'HEAD'],
                        capture_output=True, text=True, cwd='/workspace')
FINAL(result.stdout)
```
#### Results
- Execution time: **316 seconds** (~5.3 minutes)
- Agents spawned: **40** (25 at depth 1, 15 at depth 2)
**Generated patch:**
```diff
diff --git a/sklearn/utils/_array_api.py b/sklearn/utils/_array_api.py
--- a/sklearn/utils/_array_api.py
+++ b/sklearn/utils/_array_api.py
@@ -19,8 +19,7 @@ from sklearn.externals.array_api_compat import numpy as np_compat
from sklearn.utils._dataframe import is_df_or_series
from sklearn.utils.fixes import parse_version
-# TODO: complete __all__
-__all__ = ["xpx"] # we import xpx here just to re-export it, need this to appease ruff
+__all__ = ['device', 'get_namespace', 'get_namespace_and_device', 'indexing_dtype', 'move_to', 'size', 'supported_float_dtypes', 'xpx', 'yield_namespace_device_dtype_combinations', 'yield_namespaces']
```
The agent found the easiest TODO (`# TODO: complete __all__` in `sklearn/utils/_array_api.py`) and completed the `__all__` list with all public symbols from the module.
### 6. Configuration
Configure the agent in `config.yaml`:
```yaml
# Model configuration - using LiteLLM format
model:
  name: "openrouter/google/gemini-3-flash-preview"

# RLM configuration
rlm:
  max_sandboxes: 50
  max_iterations: 50
  global_timeout: 3600
  result_truncation_limit: 10000
```
| Parameter | Default | Description |
|-----------|---------|-------------|
| `model.name` | `openrouter/google/gemini-3-flash-preview` | LLM model in LiteLLM format |
| `rlm.max_sandboxes` | 50 | Maximum total sandboxes across entire rollout |
| `rlm.max_iterations` | 50 | Maximum iterations per agent |
| `rlm.global_timeout` | 3600 | Total timeout in seconds |
| `rlm.result_truncation_limit` | 10000 | Max chars in sub-agent results |
:::tip[Scaling Tips]
- Increase `max_sandboxes` for tasks requiring more parallel exploration
- The sandbox budget tracks total sandboxes created over the lifetime of the rollout
- Sub-agent sandboxes are deleted immediately after completion
:::
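The budget described above can be sketched as a small thread-safe counter. This is illustrative; the actual budget implementation in the repository may differ:

```python
import threading

class SandboxBudget:
    """Sketch of a lifetime sandbox budget: acquisitions count toward
    max_sandboxes for the whole rollout and are never returned."""
    def __init__(self, max_sandboxes: int):
        self.max_sandboxes = max_sandboxes
        self.created = 0
        self._lock = threading.Lock()

    def can_acquire(self) -> bool:
        with self._lock:
            if self.created >= self.max_sandboxes:
                return False
            self.created += 1
            return True
```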
### 7. Viewing Results
Results are saved to the `results/` directory as JSON files. Use the built-in viewer:
```bash
python -m http.server 8000
# Open http://localhost:8000/viewer/
```
The viewer provides:
- Interactive tree visualization of the agent hierarchy
- Iteration details with code and output for each agent
- Statistics: agent count, max depth, total iterations
### 8. Conclusion
Current language models aren't specifically trained to leverage recursive delegation, so RLMs don't necessarily outperform single-agent approaches on benchmarks yet. However, the architecture demonstrates compelling properties for complex tasks.
In our scikit-learn example, 40 agents ran in parallel across the agent tree, each with its own isolated sandbox, completing the entire run in just over 5 minutes. This level of parallelism, where each agent can freely modify files, run tests, and explore without affecting others, would be difficult to achieve without per-agent sandboxes.
**Key advantages of this approach:**
- **Recursive decomposition**: Complex tasks naturally break into sub-tasks handled by specialized agents
- **Isolated execution**: Each agent gets a fresh sandbox, preventing interference
- **Parallel exploration**: `rlm_query_batched()` enables concurrent investigation
Daytona is an open-source, secure and elastic infrastructure for running AI-generated code. Daytona provides full composable computers — [sandboxes](https://www.daytona.io/docs/en/sandboxes.md) — that you can manage programmatically using the Daytona SDKs, [CLI](https://www.daytona.io/docs/en/tools/cli.md), and [API](https://www.daytona.io/docs/en/tools/api.md) to run and control code execution.
The Daytona SDK is available with [Python](https://www.daytona.io/docs/en/python-sdk.md), [TypeScript](https://www.daytona.io/docs/en/typescript-sdk.md), [Ruby](https://www.daytona.io/docs/en/ruby-sdk.md) and [Go](https://www.daytona.io/docs/en/go-sdk.md) interfaces.
## 1. Create an account
Open the [Daytona Dashboard ↗](https://app.daytona.io/) to create your account. Daytona supports account creation using an email and password, or by connecting your Google or GitHub account.
## 2. Obtain an API key
Generate an API key from the [Daytona Dashboard ↗](https://app.daytona.io/dashboard/keys) or using the [Daytona API](https://www.daytona.io/docs/en/tools/api.md#daytona/tag/api-keys/POST/api-keys) to authenticate SDK requests and access Daytona services.
## 3. Install the SDK
Install the Daytona [Python](https://www.daytona.io/docs/en/python-sdk.md), [TypeScript](https://www.daytona.io/docs/en/typescript-sdk.md), [Ruby](https://www.daytona.io/docs/en/ruby-sdk.md) or [Go](https://www.daytona.io/docs/en/go-sdk.md) SDK to interact with sandboxes from code.
```bash
pip install daytona
```
```bash
npm install @daytona/sdk
```
```bash
gem install daytona
```
```bash
go get github.com/daytonaio/daytona/libs/sdk-go
```
## 4. Create a Sandbox
Create a [sandbox](https://www.daytona.io/docs/en/sandboxes.md) to run your code securely in an isolated environment.
`main.py`
```python
# Import the Daytona SDK
from daytona import Daytona, DaytonaConfig
# Define the configuration
config = DaytonaConfig(api_key="YOUR_API_KEY") # Replace with your API key
# Initialize the Daytona client
daytona = Daytona(config)
# Create the Sandbox instance
sandbox = daytona.create()
```
`index.mts`
```typescript
// Import the Daytona SDK
import { Daytona } from '@daytona/sdk'
// Initialize the Daytona client
const daytona = new Daytona({ apiKey: 'YOUR_API_KEY' }) // Replace with your API key
// Create the Sandbox instance
const sandbox = await daytona.create()
```
`main.rb`
```ruby
require 'daytona'
# Initialize the Daytona client
config = Daytona::Config.new(api_key: 'YOUR_API_KEY') # Replace with your API key
# Create the Daytona client
daytona = Daytona::Daytona.new(config)
# Create the Sandbox instance
sandbox = daytona.create
```
`main.go`
```go
package main

import (
    "context"
    "fmt"

    "github.com/daytonaio/daytona/libs/sdk-go/pkg/daytona"
    "github.com/daytonaio/daytona/libs/sdk-go/pkg/types"
)

func main() {
    config := &types.DaytonaConfig{
        APIKey: "YOUR_API_KEY", // Replace with your API key
    }
    client, _ := daytona.NewClientWithConfig(config)
    ctx := context.Background()
    sandbox, _ := client.Create(ctx, nil)
    fmt.Println(sandbox.ID)
}
```
```bash
curl https://app.daytona.io/api/sandbox \
--request POST \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer YOUR_API_KEY' \
--data '{}'
```
:::tip
Daytona supports multiple options to configure your environment: [in code](https://www.daytona.io/docs/en/configuration.md#configuration-in-code), [environment variables](https://www.daytona.io/docs/en/configuration.md#environment-variables), [.env file](https://www.daytona.io/docs/en/configuration.md#env-file), and [default values](https://www.daytona.io/docs/en/configuration.md#default-values).
:::
## 5. Write and run code
Create a program that runs code inside a sandbox. The following snippets are examples of "Hello World" programs that run securely inside a sandbox.
`main.py`
```python
# Import the Daytona SDK
from daytona import Daytona, DaytonaConfig
# Define the configuration
config = DaytonaConfig(api_key="YOUR_API_KEY") # Replace with your API key
# Initialize the Daytona client
daytona = Daytona(config)
# Create the Sandbox instance
sandbox = daytona.create()
# Run the code securely inside the Sandbox
response = sandbox.process.code_run('print("Hello World")')
# Check the response
if response.exit_code != 0:
    print(f"Error: {response.exit_code} {response.result}")
else:
    print(response.result)
# Clean up
sandbox.delete()
```
`index.mts`
```typescript
// Import the Daytona SDK
import { Daytona } from '@daytona/sdk'
// Initialize the Daytona client
const daytona = new Daytona({ apiKey: 'YOUR_API_KEY' }) // Replace with your API key
// Create the Sandbox instance
const sandbox = await daytona.create({
  language: 'typescript',
})
// Run the code securely inside the Sandbox
const response = await sandbox.process.codeRun('console.log("Hello World")')
// Check the response
if (response.exitCode !== 0) {
  console.error(`Error: ${response.exitCode} ${response.result}`)
} else {
  console.log(response.result)
}
// Clean up
await sandbox.delete()
```
`main.rb`
```ruby
require 'daytona'
# Initialize the Daytona client
config = Daytona::Config.new(api_key: 'YOUR_API_KEY')
daytona = Daytona::Daytona.new(config)
# Create the Sandbox instance
sandbox = daytona.create
# Run the code securely inside the Sandbox
response = sandbox.process.code_run(code: 'print("Hello World")')
puts response.result
```
`main.go`
```go
// Import the Daytona SDK
package main
import (
	"context"
	"log"

	"github.com/daytonaio/daytona/libs/sdk-go/pkg/daytona"
	"github.com/daytonaio/daytona/libs/sdk-go/pkg/types"
)

func main() {
	// Define the configuration
	config := &types.DaytonaConfig{
		APIKey: "YOUR_API_KEY", // Replace with your API key
	}
	// Initialize the Daytona client
	client, err := daytona.NewClientWithConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()
	// Create the Sandbox instance
	params := types.SnapshotParams{
		SandboxBaseParams: types.SandboxBaseParams{
			Language: types.CodeLanguagePython,
		},
	}
	sandbox, err := client.Create(ctx, params)
	if err != nil {
		log.Fatal(err)
	}
	// Run the code securely inside the Sandbox
	result, err := sandbox.Process.ExecuteCommand(ctx, `echo "Hello World"`)
	// Check the response
	if err != nil {
		log.Fatalf("Error: %v", err)
	}
	if result.ExitCode != 0 {
		log.Printf("Error: %d %s", result.ExitCode, result.Result)
	} else {
		log.Println(result.Result)
	}
	// Clean up
	sandbox.Delete(ctx)
}
```
## Summary
By following the steps above, you have created a Daytona account, obtained an API key, installed the SDK, created a sandbox, and run code securely inside it.
## Next steps
Use the following resources to interact with sandboxes:
- Learn more about Daytona with the [Getting Started](https://www.daytona.io/docs/en/getting-started.md) guide
- Get started with [Python](https://www.daytona.io/docs/en/python-sdk.md), [TypeScript](https://www.daytona.io/docs/en/typescript-sdk.md), [Ruby](https://www.daytona.io/docs/en/ruby-sdk.md) or [Go](https://www.daytona.io/docs/en/go-sdk.md) **SDKs**
- Install the [CLI](https://www.daytona.io/docs/en/getting-started.md#cli) to manage sandboxes from the command line
- Use the [API](https://www.daytona.io/docs/en/tools/api.md) to manage sandboxes programmatically
- View [examples](https://www.daytona.io/docs/en/getting-started.md#examples) for common sandbox operations and best practices
- Explore [guides](https://www.daytona.io/docs/en/guides.md) to connect Daytona with [Claude](https://www.daytona.io/docs/en/guides/claude.md), [OpenCode](https://www.daytona.io/docs/en/guides/opencode/opencode-web-agent.md), [Codex](https://www.daytona.io/docs/en/guides/codex/codex-sdk-interactive-terminal-sandbox.md), [LangChain](https://www.daytona.io/docs/en/guides/langchain/langchain-data-analysis.md) and more
:::tip
For faster development with AI agents and assistants, use our LLM context files and agent skills. Copy the [llms-full.txt](https://www.daytona.io/docs/llms-full.txt.md) and [llms.txt](https://www.daytona.io/docs/llms.txt.md) files into your projects or chat contexts, and install the [**skill**](https://github.com/daytona/skills) in your agent's skills directory to use Daytona features.
:::
## CodeInterpreter
Stateful code interpretation interface for a Sandbox.
Provides Python code execution in interpreter contexts that preserve state between runs.
### Methods
#### runCode()
```java
public ExecutionResult runCode(String code)
```
Executes Python code in the default interpreter context.
**Parameters**:
- `code` _String_ - Python code to execute
**Returns**:
- `ExecutionResult` - aggregated execution result
**Throws**:
- `DaytonaException` - if code is empty, connection fails, or execution fails
#### runCode()
```java
public ExecutionResult runCode(String code, RunCodeOptions options)
```
Executes Python code with streaming callbacks and options.
**Parameters**:
- `code` _String_ - Python code to execute
- `options` _RunCodeOptions_ - execution options including callbacks and timeout; may be `null`
**Returns**:
- `ExecutionResult` - aggregated execution result
**Throws**:
- `DaytonaException` - if code is empty, connection fails, or execution fails
#### createContext()
```java
public InterpreterContext createContext()
```
Creates a new interpreter context using sandbox defaults.
**Returns**:
- `InterpreterContext` - created interpreter context metadata
**Throws**:
- `DaytonaException` - if context creation fails
#### createContext()
```java
public InterpreterContext createContext(String cwd)
```
Creates a new interpreter context.
**Parameters**:
- `cwd` _String_ - working directory for the new context; `null` uses sandbox default
**Returns**:
- `InterpreterContext` - created interpreter context metadata
**Throws**:
- `DaytonaException` - if context creation fails
#### listContexts()
```java
public List<InterpreterContext> listContexts()
```
Lists all user-created interpreter contexts.
**Returns**:
- `List<InterpreterContext>` - list of interpreter contexts; empty list when no contexts exist
**Throws**:
- `DaytonaException` - if listing contexts fails
#### deleteContext()
```java
public void deleteContext(String contextId)
```
Deletes an interpreter context.
**Parameters**:
- `contextId` _String_ - context identifier to delete
**Throws**:
- `DaytonaException` - if deletion fails
## RunCodeOptions
Options for `CodeInterpreter#runCode(String, RunCodeOptions)`.
### Constructors
#### new RunCodeOptions()
```java
public RunCodeOptions()
```
### Methods
#### getTimeout()
```java
public Integer getTimeout()
```
**Returns**:
- `Integer` - configured execution timeout, or `null` if not set
#### setTimeout()
```java
public RunCodeOptions setTimeout(Integer timeout)
```
**Parameters**:
- `timeout` _Integer_ - execution timeout to apply
**Returns**:
- `RunCodeOptions` - this options instance for chaining
#### getOnStdout()
```java
public Consumer<String> getOnStdout()
```
**Returns**:
- `Consumer<String>` - configured stdout callback, or `null` if not set
#### setOnStdout()
```java
public RunCodeOptions setOnStdout(Consumer<String> onStdout)
```
**Parameters**:
- `onStdout` _Consumer\<String\>_ - callback invoked for each chunk of stdout output
**Returns**:
- `RunCodeOptions` - this options instance for chaining
#### getOnStderr()
```java
public Consumer<String> getOnStderr()
```
**Returns**:
- `Consumer<String>` - configured stderr callback, or `null` if not set
#### setOnStderr()
```java
public RunCodeOptions setOnStderr(Consumer<String> onStderr)
```
**Parameters**:
- `onStderr` _Consumer\<String\>_ - callback invoked for each chunk of stderr output
**Returns**:
- `RunCodeOptions` - this options instance for chaining
#### getOnError()
```java
public Consumer getOnError()
```
**Returns**:
- `Consumer` - configured error callback, or `null` if not set
#### setOnError()
```java
public RunCodeOptions setOnError(Consumer onError)
```
**Parameters**:
- `onError` _Consumer_ - callback invoked on execution errors
**Returns**:
- `RunCodeOptions` - this options instance for chaining
## ComputerUse
Desktop automation operations for a Sandbox.
Provides a Java facade for computer-use features including desktop session management,
screenshots, mouse and keyboard automation, display/window inspection, and screen recording.
### Methods
#### start()
```java
public ComputerUseStartResponse start()
```
Starts the computer-use desktop stack (VNC/noVNC and related processes).
**Returns**:
- `ComputerUseStartResponse` - start response containing process status details
#### stop()
```java
public ComputerUseStopResponse stop()
```
Stops all computer-use desktop processes.
**Returns**:
- `ComputerUseStopResponse` - stop response containing process status details
#### getStatus()
```java
public ComputerUseStatusResponse getStatus()
```
Returns current computer-use status.
**Returns**:
- `ComputerUseStatusResponse` - overall computer-use status
#### takeScreenshot()
```java
public ScreenshotResponse takeScreenshot()
```
Captures a full-screen screenshot without cursor.
**Returns**:
- `ScreenshotResponse` - screenshot payload (base64 image and metadata)
#### takeScreenshot()
```java
public ScreenshotResponse takeScreenshot(boolean showCursor)
```
Captures a full-screen screenshot.
**Parameters**:
- `showCursor` _boolean_ - whether to render cursor in the screenshot
**Returns**:
- `ScreenshotResponse` - screenshot payload (base64 image and metadata)
#### takeRegionScreenshot()
```java
public ScreenshotResponse takeRegionScreenshot(int x, int y, int width, int height)
```
Captures a screenshot of a rectangular region without cursor.
**Parameters**:
- `x` _int_ - region top-left X coordinate
- `y` _int_ - region top-left Y coordinate
- `width` _int_ - region width in pixels
- `height` _int_ - region height in pixels
**Returns**:
- `ScreenshotResponse` - region screenshot payload
#### takeCompressedScreenshot()
```java
public ScreenshotResponse takeCompressedScreenshot(String format, int quality, double scale)
```
Captures a compressed full-screen screenshot.
**Parameters**:
- `format` _String_ - output image format (for example: `png`, `jpeg`, `webp`)
- `quality` _int_ - compression quality (typically 1-100, format dependent)
- `scale` _double_ - screenshot scale factor (for example: `0.5` for 50%)
**Returns**:
- `ScreenshotResponse` - compressed screenshot payload
#### click()
```java
public MouseClickResponse click(int x, int y)
```
Performs a left mouse click at the given coordinates.
**Parameters**:
- `x` _int_ - target X coordinate
- `y` _int_ - target Y coordinate
**Returns**:
- `MouseClickResponse` - click response with resulting cursor position
#### click()
```java
public MouseClickResponse click(int x, int y, String button)
```
Performs a mouse click at the given coordinates with a specific button.
**Parameters**:
- `x` _int_ - target X coordinate
- `y` _int_ - target Y coordinate
- `button` _String_ - button type (`left`, `right`, `middle`)
**Returns**:
- `MouseClickResponse` - click response with resulting cursor position
#### doubleClick()
```java
public MouseClickResponse doubleClick(int x, int y)
```
Performs a double left-click at the given coordinates.
**Parameters**:
- `x` _int_ - target X coordinate
- `y` _int_ - target Y coordinate
**Returns**:
- `MouseClickResponse` - click response with resulting cursor position
#### moveMouse()
```java
public MousePositionResponse moveMouse(int x, int y)
```
Moves the mouse cursor to the given coordinates.
**Parameters**:
- `x` _int_ - target X coordinate
- `y` _int_ - target Y coordinate
**Returns**:
- `MousePositionResponse` - new mouse position
#### getMousePosition()
```java
public MousePositionResponse getMousePosition()
```
Returns current mouse position.
**Returns**:
- `MousePositionResponse` - current mouse cursor coordinates
#### drag()
```java
public MouseDragResponse drag(int startX, int startY, int endX, int endY)
```
Drags the mouse from one point to another using the left button.
**Parameters**:
- `startX` _int_ - drag start X coordinate
- `startY` _int_ - drag start Y coordinate
- `endX` _int_ - drag end X coordinate
- `endY` _int_ - drag end Y coordinate
**Returns**:
- `MouseDragResponse` - drag response with resulting cursor position
#### scroll()
```java
public ScrollResponse scroll(int x, int y, int deltaX, int deltaY)
```
Scrolls at the given coordinates.
The current toolbox API supports directional scrolling (`up`/`down`) with an
amount. This method maps `deltaY` to vertical scroll direction and magnitude.
If `deltaY` is `0`, `deltaX` is used as a fallback.
**Parameters**:
- `x` _int_ - anchor X coordinate
- `y` _int_ - anchor Y coordinate
- `deltaX` _int_ - horizontal delta (used only when `deltaY == 0`)
- `deltaY` _int_ - vertical delta
**Returns**:
- `ScrollResponse` - scroll response indicating operation success
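The delta-to-direction mapping described above can be sketched in plain Java. `ScrollCommand` and `ScrollMapper` below are hypothetical illustration types, not SDK classes, and the sketch assumes positive deltas scroll down:

```java
// Illustrative sketch of the delta-to-direction mapping described above.
// ScrollCommand is a hypothetical record, not an SDK type.
record ScrollCommand(String direction, int amount) {}

class ScrollMapper {
    static ScrollCommand map(int deltaX, int deltaY) {
        // Prefer the vertical delta; fall back to the horizontal one when it is zero.
        int delta = (deltaY != 0) ? deltaY : deltaX;
        String direction = (delta < 0) ? "up" : "down";
        return new ScrollCommand(direction, Math.abs(delta));
    }
}
```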
#### typeText()
```java
public void typeText(String text)
```
Types text using keyboard automation.
**Parameters**:
- `text` _String_ - text to type
#### pressKey()
```java
public void pressKey(String key)
```
Presses a single key.
**Parameters**:
- `key` _String_ - key to press (for example: `Enter`, `Escape`, `a`)
#### pressHotkey()
```java
public void pressHotkey(String... keys)
```
Presses a key combination as a hotkey sequence.
Keys are joined with `+` before being sent (for example,
`pressHotkey("ctrl", "shift", "t") -> "ctrl+shift+t"`).
**Parameters**:
- `keys` _String..._ - hotkey parts to combine
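The joining step can be reproduced with `String.join`; the `HotkeyJoiner` wrapper below is only an illustration of the documented behavior, not an SDK class:

```java
// Keys are joined with "+" before being sent, as described above.
class HotkeyJoiner {
    static String join(String... keys) {
        return String.join("+", keys);
    }
}
```

For example, `HotkeyJoiner.join("ctrl", "shift", "t")` yields `"ctrl+shift+t"`.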
#### getDisplayInfo()
```java
public DisplayInfoResponse getDisplayInfo()
```
Returns display configuration information.
**Returns**:
- `DisplayInfoResponse` - display information including available displays and their geometry
#### getWindows()
```java
public WindowsResponse getWindows()
```
Returns currently open windows.
**Returns**:
- `WindowsResponse` - window list and metadata
#### startRecording()
```java
public Recording startRecording()
```
Starts a recording with default options.
**Returns**:
- `Recording` - newly started recording metadata
#### startRecording()
```java
public Recording startRecording(String label)
```
Starts a recording with an optional label.
**Parameters**:
- `label` _String_ - optional recording label
**Returns**:
- `Recording` - newly started recording metadata
#### stopRecording()
```java
public Recording stopRecording(String id)
```
Stops an active recording.
**Parameters**:
- `id` _String_ - recording identifier
**Returns**:
- `Recording` - finalized recording metadata
#### listRecordings()
```java
public ListRecordingsResponse listRecordings()
```
Lists all recordings for the current sandbox session.
**Returns**:
- `ListRecordingsResponse` - recordings list response
#### getRecording()
```java
public Recording getRecording(String id)
```
Returns metadata for a specific recording.
**Parameters**:
- `id` _String_ - recording identifier
**Returns**:
- `Recording` - recording details
#### downloadRecording()
```java
public File downloadRecording(String id)
```
Downloads a recording file.
**Parameters**:
- `id` _String_ - recording identifier
**Returns**:
- `File` - downloaded temporary/local file handle returned by the API client
#### deleteRecording()
```java
public void deleteRecording(String id)
```
Deletes a recording.
**Parameters**:
- `id` _String_ - recording identifier
## DaytonaConfig
Configuration used to initialize a `Daytona` client.
Contains API authentication settings, API endpoint URL, and the default target region used
when creating new Sandboxes.
### Methods
#### getApiKey()
```java
public String getApiKey()
```
Returns the API key used to authenticate SDK requests.
**Returns**:
- `String` - API key configured for the client
#### getApiUrl()
```java
public String getApiUrl()
```
Returns the Daytona API base URL.
**Returns**:
- `String` - API URL used for main API requests
#### getTarget()
```java
public String getTarget()
```
Returns the default target location for newly created Sandboxes.
**Returns**:
- `String` - target region identifier, or `null` if not configured
## DaytonaConfig.Builder
Builder for creating immutable `DaytonaConfig` instances.
### Methods
#### apiKey()
```java
public Builder apiKey(String apiKey)
```
Sets the API key used for authenticating SDK requests.
**Parameters**:
- `apiKey` _String_ - Daytona API key
**Returns**:
- `Builder` - this builder instance
#### apiUrl()
```java
public Builder apiUrl(String apiUrl)
```
Sets the Daytona API base URL.
**Parameters**:
- `apiUrl` _String_ - API URL to use; defaults to `https://app.daytona.io/api` when omitted
**Returns**:
- `Builder` - this builder instance
#### target()
```java
public Builder target(String target)
```
Sets the default target region for new Sandboxes.
**Parameters**:
- `target` _String_ - target location identifier
**Returns**:
- `Builder` - this builder instance
#### build()
```java
public DaytonaConfig build()
```
Builds a new immutable `DaytonaConfig`.
**Returns**:
- `DaytonaConfig` - configured `DaytonaConfig` instance
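A minimal self-contained mirror of this builder shape shows the pattern (immutable product, fluent builder). `Config` here is only a stand-in for `DaytonaConfig`, not the SDK class; the default API URL mirrors the documented fallback:

```java
// Minimal self-contained mirror of the documented builder shape -- not the SDK classes.
final class Config {
    private final String apiKey;
    private final String apiUrl;
    private final String target;

    private Config(Builder b) {
        this.apiKey = b.apiKey;
        // Mirrors the documented default: https://app.daytona.io/api when omitted.
        this.apiUrl = (b.apiUrl != null) ? b.apiUrl : "https://app.daytona.io/api";
        this.target = b.target;
    }

    String apiKey() { return apiKey; }
    String apiUrl() { return apiUrl; }
    String target() { return target; }

    static final class Builder {
        private String apiKey;
        private String apiUrl;
        private String target;

        Builder apiKey(String v) { this.apiKey = v; return this; }
        Builder apiUrl(String v) { this.apiUrl = v; return this; }
        Builder target(String v) { this.target = v; return this; }
        Config build() { return new Config(this); }
    }
}
```

Each setter returns the builder itself, so calls chain until `build()` produces the immutable result.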
## Daytona
Main class for interacting with the Daytona API.
Provides methods to create, retrieve, and list Sandboxes, and exposes service accessors for
Snapshots and Volumes.
Implements `AutoCloseable` for deterministic HTTP resource cleanup.
**Properties**:
- `CODE_TOOLBOX_LANGUAGE_LABEL` _String_ -
### Constructors
#### new Daytona()
```java
public Daytona()
```
Creates a client using environment variables.
Reads `DAYTONA_API_KEY`, `DAYTONA_API_URL`, and `DAYTONA_TARGET`.
**Throws**:
- `DaytonaException` - if required authentication is missing
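The environment lookup can be sketched in plain Java. `EnvConfig.resolve` is a hypothetical helper (the SDK's internal resolution logic is not documented here), shown only to illustrate the fallback-or-fail behavior:

```java
// Sketch of env-var resolution, in the spirit of the no-arg constructor:
// use the variable when set, else a fallback, else fail.
class EnvConfig {
    static String resolve(String name, String fallback) {
        String value = System.getenv(name);
        if (value == null || value.isEmpty()) {
            if (fallback == null) {
                // Mirrors the documented behavior: missing credentials are an error.
                throw new IllegalStateException(name + " is not set");
            }
            return fallback;
        }
        return value;
    }
}
```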
#### new Daytona()
```java
public Daytona(DaytonaConfig config)
```
Creates a client with explicit configuration.
**Parameters**:
- `config` _DaytonaConfig_ - SDK configuration containing API key and endpoint settings
**Throws**:
- `DaytonaException` - if configuration is invalid or missing credentials
### Methods
#### create()
```java
public Sandbox create()
```
Creates a Sandbox with default parameters and timeout.
**Returns**:
- `Sandbox` - created and started `Sandbox`
**Throws**:
- `DaytonaException` - if creation or startup fails
#### create()
```java
public Sandbox create(CreateSandboxFromSnapshotParams params)
```
Creates a Sandbox from snapshot-oriented parameters using default timeout.
**Parameters**:
- `params` _CreateSandboxFromSnapshotParams_ - snapshot creation parameters
**Returns**:
- `Sandbox` - created and started `Sandbox`
**Throws**:
- `DaytonaException` - if creation or startup fails
#### create()
```java
public Sandbox create(CreateSandboxFromImageParams params)
```
Creates a Sandbox from image-oriented parameters using default timeout.
**Parameters**:
- `params` _CreateSandboxFromImageParams_ - image creation parameters
**Returns**:
- `Sandbox` - created and started `Sandbox`
**Throws**:
- `DaytonaException` - if creation or startup fails
#### create()
```java
public Sandbox create(CreateSandboxFromSnapshotParams params, long timeoutSeconds)
```
Creates a Sandbox from snapshot parameters.
**Parameters**:
- `params` _CreateSandboxFromSnapshotParams_ - snapshot creation parameters including env vars, labels, and lifecycle options
- `timeoutSeconds` _long_ - maximum seconds to wait for the Sandbox to reach `started`
**Returns**:
- `Sandbox` - created and started `Sandbox`
**Throws**:
- `DaytonaException` - if creation fails or the Sandbox does not start in time
#### create()
```java
public Sandbox create(CreateSandboxFromImageParams params, long timeoutSeconds)
```
Creates a Sandbox from image parameters.
**Parameters**:
- `params` _CreateSandboxFromImageParams_ - image creation parameters including image source and optional resources
- `timeoutSeconds` _long_ - maximum seconds to wait for the Sandbox to reach `started`
**Returns**:
- `Sandbox` - created and started `Sandbox`
**Throws**:
- `DaytonaException` - if creation fails or the Sandbox does not start in time
#### create()
```java
public Sandbox create(CreateSandboxFromImageParams params, long timeoutSeconds, java.util.function.Consumer<String> onSnapshotCreateLogs)
```
Creates a new Sandbox from a declarative image with build log streaming.
**Parameters**:
- `params` _CreateSandboxFromImageParams_ - creation parameters including the image definition
- `timeoutSeconds` _long_ - maximum seconds to wait for the Sandbox to reach `started`
- `onSnapshotCreateLogs` _java.util.function.Consumer\<String\>_ - callback for build log lines; `null` to skip streaming
**Returns**:
- `Sandbox` - created and started `Sandbox`
**Throws**:
- `DaytonaException` - if creation fails or the Sandbox does not start in time
#### get()
```java
public Sandbox get(String sandboxIdOrName)
```
Retrieves a Sandbox by ID or name.
**Parameters**:
- `sandboxIdOrName` _String_ - Sandbox identifier or name
**Returns**:
- `Sandbox` - resolved `Sandbox`
**Throws**:
- `DaytonaException` - if the Sandbox is not found or request fails
#### list()
```java
public PaginatedSandboxes list()
```
Lists Sandboxes using default pagination.
**Returns**:
- `PaginatedSandboxes` - first page of Sandboxes with default page size
**Throws**:
- `DaytonaException` - if listing fails
#### list()
```java
public PaginatedSandboxes list(Map<String, String> labels, Integer page, Integer limit)
```
Lists Sandboxes with optional label filtering and pagination.
**Parameters**:
- `labels` _Map\<String, String\>_ - label filter map; only Sandboxes with matching labels are returned
- `page` _Integer_ - page number starting from 1
- `limit` _Integer_ - maximum items per page
**Returns**:
- `PaginatedSandboxes` - paginated Sandbox list
**Throws**:
- `DaytonaException` - if listing fails
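The `page`/`limit` parameters support a standard drain-all-pages loop. In this sketch, the `Page` record and the `fetch` function are hypothetical stand-ins for `PaginatedSandboxes` and `list(labels, page, limit)`, so the loop itself can run without the SDK:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiFunction;

// Hypothetical stand-in for a paginated response: items plus a total count.
record Page(List<String> items, int total) {}

class Paginator {
    // Drains all pages by advancing the 1-based page number until a page
    // returns fewer than `limit` items.
    static List<String> drain(BiFunction<Integer, Integer, Page> fetch, int limit) {
        List<String> all = new ArrayList<>();
        int page = 1;
        while (true) {
            Page p = fetch.apply(page, limit);
            all.addAll(p.items());
            if (p.items().size() < limit) {
                break;
            }
            page++;
        }
        return all;
    }
}
```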
#### snapshot()
```java
public SnapshotService snapshot()
```
Returns Snapshot management service.
**Returns**:
- `SnapshotService` - snapshot service instance
#### volume()
```java
public VolumeService volume()
```
Returns Volume management service.
**Returns**:
- `VolumeService` - volume service instance
#### close()
```java
public void close()
```
Closes this client and releases underlying HTTP resources.
## DaytonaException
Base exception for all Daytona SDK errors.
Subclasses map to specific HTTP status codes and allow callers to catch
precise failure conditions without string-parsing error messages:
```java
try {
Sandbox sandbox = daytona.get("nonexistent-id");
} catch (DaytonaNotFoundException e) {
// sandbox does not exist
} catch (DaytonaAuthenticationException e) {
// invalid API key
} catch (DaytonaException e) {
// other SDK error
}
```
### Constructors
#### new DaytonaException()
```java
public DaytonaException(String message)
```
Creates a generic Daytona exception.
**Parameters**:
- `message` _String_ - error description
#### new DaytonaException()
```java
public DaytonaException(String message, Throwable cause)
```
Creates a generic Daytona exception with a cause.
**Parameters**:
- `message` _String_ - error description
- `cause` _Throwable_ - root cause
#### new DaytonaException()
```java
public DaytonaException(int statusCode, String message)
```
Creates a Daytona exception with explicit HTTP status code.
**Parameters**:
- `statusCode` _int_ - HTTP status code
- `message` _String_ - error description
#### new DaytonaException()
```java
public DaytonaException(int statusCode, String message, Map headers)
```
Creates a Daytona exception with HTTP status code and headers.
**Parameters**:
- `statusCode` _int_ - HTTP status code
- `message` _String_ - error description
- `headers` _Map_ - response headers
### Methods
#### getStatusCode()
```java
public int getStatusCode()
```
Returns the HTTP status code, or 0 if not applicable.
**Returns**:
- `int` -
#### getHeaders()
```java
public Map getHeaders()
```
Returns the HTTP response headers, or an empty map if not available.
**Returns**:
- `Map` - response headers; empty map when not available
## DaytonaAuthenticationException
Raised when API credentials are missing or invalid (HTTP 401).
```java
try {
daytona.create();
} catch (DaytonaAuthenticationException e) {
System.err.println("Invalid or missing API key");
}
```
### Constructors
#### new DaytonaAuthenticationException()
```java
public DaytonaAuthenticationException(String message)
```
Creates an authentication exception.
**Parameters**:
- `message` _String_ - error description from the API
## DaytonaBadRequestException
Raised when the request is malformed or contains invalid parameters (HTTP 400).
```java
try {
daytona.create(params);
} catch (DaytonaBadRequestException e) {
System.err.println("Invalid request parameters: " + e.getMessage());
}
```
### Constructors
#### new DaytonaBadRequestException()
```java
public DaytonaBadRequestException(String message)
```
Creates a bad-request exception.
**Parameters**:
- `message` _String_ - error description from the API
## DaytonaConflictException
Raised when an operation conflicts with the current state (HTTP 409).
Common causes: creating a resource with a name that already exists,
or performing an operation incompatible with the resource's current state.
```java
try {
daytona.snapshot().create(params);
} catch (DaytonaConflictException e) {
System.err.println("A snapshot with this name already exists");
}
```
### Constructors
#### new DaytonaConflictException()
```java
public DaytonaConflictException(String message)
```
Creates a conflict exception.
**Parameters**:
- `message` _String_ - error description from the API
## DaytonaConnectionException
Raised for network-level connection failures where no HTTP response is received: the SDK cannot reach the Daytona API due to network issues such as a DNS failure, a refused connection, or a TLS error.
```java
try {
daytona.create();
} catch (DaytonaConnectionException e) {
System.err.println("Cannot reach Daytona API: " + e.getMessage());
}
```
### Constructors
#### new DaytonaConnectionException()
```java
public DaytonaConnectionException(String message)
```
Creates a connection exception.
**Parameters**:
- `message` _String_ - connection failure description
#### new DaytonaConnectionException()
```java
public DaytonaConnectionException(String message, Throwable cause)
```
Creates a connection exception with a cause.
**Parameters**:
- `message` _String_ - connection failure description
- `cause` _Throwable_ - root cause
## DaytonaForbiddenException
Raised when the authenticated user lacks permission to perform an operation (HTTP 403).
```java
try {
daytona.get(sandboxId).delete();
} catch (DaytonaForbiddenException e) {
System.err.println("Not authorized to delete this sandbox");
}
```
### Constructors
#### new DaytonaForbiddenException()
```java
public DaytonaForbiddenException(String message)
```
Creates a forbidden exception.
**Parameters**:
- `message` _String_ - error description from the API
## DaytonaNotFoundException
Raised when a requested resource does not exist (HTTP 404).
### Constructors
#### new DaytonaNotFoundException()
```java
public DaytonaNotFoundException(String message)
```
Creates a not-found exception.
**Parameters**:
- `message` _String_ - error description from the API
## DaytonaRateLimitException
Raised when API rate limits are exceeded (HTTP 429).
### Constructors
#### new DaytonaRateLimitException()
```java
public DaytonaRateLimitException(String message)
```
Creates a rate-limit exception.
**Parameters**:
- `message` _String_ - error description from the API
## DaytonaServerException
Raised for unexpected server-side failures (HTTP 5xx).
These are typically transient and safe to retry with exponential backoff.
```java
try {
daytona.create();
} catch (DaytonaServerException e) {
System.err.println("Server error (status " + e.getStatusCode() + "), retry later");
}
```
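Since 5xx responses are typically transient, a caller might wrap the call in a bounded retry with exponential backoff. This sketch is a generic caller-side pattern, not an SDK feature; the `Callable` stands in for any SDK call, and in practice the catch would be narrowed to `DaytonaServerException`:

```java
import java.util.concurrent.Callable;

// Generic bounded retry with exponential backoff -- an illustrative
// caller-side pattern, not part of the SDK.
class Backoff {
    static <T> T retry(Callable<T> call, int maxAttempts, long baseDelayMs) throws Exception {
        Exception last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                return call.call();
            } catch (Exception e) {
                last = e;
                // Delay doubles each attempt: base, 2*base, 4*base, ...
                Thread.sleep(baseDelayMs << attempt);
            }
        }
        throw last;
    }
}
```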
### Constructors
#### new DaytonaServerException()
```java
public DaytonaServerException(int statusCode, String message)
```
Creates a server exception.
**Parameters**:
- `statusCode` _int_ - HTTP status code (typically 5xx)
- `message` _String_ - error description from the API
## DaytonaTimeoutException
Raised when an SDK operation times out.
This exception is generated client-side and is not tied to a single HTTP status code.
### Constructors
#### new DaytonaTimeoutException()
```java
public DaytonaTimeoutException(String message, Throwable cause)
```
Creates a timeout exception with a cause.
**Parameters**:
- `message` _String_ - timeout description
- `cause` _Throwable_ - root cause
#### new DaytonaTimeoutException()
```java
public DaytonaTimeoutException(String message)
```
Creates a timeout exception.
**Parameters**:
- `message` _String_ - timeout description
## DaytonaValidationException
Raised for semantic validation failures (HTTP 422): the request is well-formed, but its values fail business-logic validation (for example, an unsupported resource class or an invalid configuration).
```java
try {
daytona.create(params);
} catch (DaytonaValidationException e) {
System.err.println("Validation failed: " + e.getMessage());
}
```
### Constructors
#### new DaytonaValidationException()
```java
public DaytonaValidationException(String message)
```
Creates a validation exception.
**Parameters**:
- `message` _String_ - error description from the API
## FileSystem
File system operations facade for a specific Sandbox.
Provides methods for directory management, file upload/download, metadata inspection, and
search/replace operations.
### Methods
#### createFolder()
```java
public void createFolder(String path, String mode)
```
Creates a directory in the Sandbox.
**Parameters**:
- `path` _String_ - directory path
- `mode` _String_ - POSIX mode (for example `755`); defaults to `755` when `null`
**Throws**:
- `io.daytona.sdk.exception.DaytonaException` - if creation fails
#### deleteFile()
```java
public void deleteFile(String path)
```
Deletes a file.
**Parameters**:
- `path` _String_ - file path to delete
**Throws**:
- `io.daytona.sdk.exception.DaytonaException` - if deletion fails
#### downloadFile()
```java
public byte[] downloadFile(String remotePath)
```
Downloads a file into memory.
**Parameters**:
- `remotePath` _String_ - source file path in the Sandbox
**Returns**:
- `byte[]` - file bytes; empty array when no file payload is returned
**Throws**:
- `io.daytona.sdk.exception.DaytonaException` - if download or local read fails
#### uploadFile()
```java
public void uploadFile(byte[] content, String remotePath)
```
Uploads in-memory file content to a Sandbox path.
**Parameters**:
- `content` _byte[]_ - file bytes; `null` uploads an empty file
- `remotePath` _String_ - destination file path in the Sandbox
**Throws**:
- `io.daytona.sdk.exception.DaytonaException` - if upload fails
#### listFiles()
```java
public List<FileInfo> listFiles(String path)
```
Lists files and directories under a path.
**Parameters**:
- `path` _String_ - directory path
**Returns**:
- `List<FileInfo>` - file metadata entries
**Throws**:
- `io.daytona.sdk.exception.DaytonaException` - if listing fails
#### getFileDetails()
```java
public FileInfo getFileDetails(String path)
```
Returns metadata for a single file or directory.
**Parameters**:
- `path` _String_ - file or directory path
**Returns**:
- `FileInfo` - metadata record
**Throws**:
- `io.daytona.sdk.exception.DaytonaException` - if lookup fails
#### findFiles()
```java
public List