Categories
AI

NemoClaw reference: DGX install, network policy, wttr.in, and troubleshooting

Reference: NemoClaw on DGX Spark — install order, non-interactive installer quirks, dashboard SSH tunnel, network policy (including wttr.in / weather), TUI vs CLI, DNS and proxy notes, and where official docs live. March 2026.

1. Install on the DGX (root, non-interactive)

Optional: SSH in from your Mac, then run bash if you prefer a clean shell. Replace YOUR_NVIDIA_API_KEY on the host only.

export NVIDIA_API_KEY="YOUR_NVIDIA_API_KEY"
export NEMOCLAW_NON_INTERACTIVE=1
export PATH="/root/.local/bin:$(npm config get prefix 2>/dev/null)/bin:$PATH"

curl -LsSf https://raw.githubusercontent.com/NVIDIA/OpenShell/main/install.sh | sh
grep -q '.local/bin' ~/.bashrc || echo 'export PATH="/root/.local/bin:$PATH"' >> ~/.bashrc
export PATH="/root/.local/bin:$PATH"

rm -rf /root/NemoClaw
git clone --depth 1 https://github.com/NVIDIA/NemoClaw.git /root/NemoClaw
chmod +x /root/NemoClaw/scripts/setup-spark.sh
/root/NemoClaw/scripts/setup-spark.sh

cd /root
curl -fsSL https://www.nvidia.com/nemoclaw.sh | bash -s -- --non-interactive

export PATH="/root/.local/bin:$(npm config get prefix)/bin:$PATH"
nemoclaw --version && openshell --version

Why bash -s -- --non-interactive: the installer clears NON_INTERACTIVE at the start of main(), so use the flag and/or NEMOCLAW_NON_INTERACTIVE=1. A plain curl … | bash without them often drops you into interactive onboarding.

If docker ps fails after Spark setup: newgrp docker or log out and SSH in again.

2. Dashboard from a Mac

On the DGX (example sandbox my-assistant):

export PATH="/root/.local/bin:$PATH"
openshell forward start -d 18789 my-assistant

On the Mac (leave open):

ssh -i ~/.ssh/id_ed25519 -N -L 18789:127.0.0.1:18789 root@192.168.1.182

Browser: http://127.0.0.1:18789/ (add #token=… if required).

3. Official documentation (NemoClaw + OpenShell)

4. Allow a host from the TUI (UI)

export PATH="$HOME/.local/bin:$PATH"
openshell term

Navigate to pending / blocked requests for your sandbox and approve. This updates the running policy until the sandbox stops — it does not persist into the baseline YAML on disk.

5. Allow a host from the CLI (no TUI)

Example sandbox name: my-assistant. OpenShell workflow:

openshell policy get my-assistant --full > /tmp/my-assistant-policy.yaml
# Edit file: add a network_policies block (host, ports, rules, binaries)
openshell policy set my-assistant --policy /tmp/my-assistant-policy.yaml --wait
openshell policy list my-assistant

There is no documented openshell policy approve <id> — approval in the product sense is TUI or YAML + policy set.

6. Editing openclaw-sandbox.yaml (e.g. weather / wttr.in)

Baseline file (typical path after global install):

$(npm root -g)/nemoclaw/nemoclaw-blueprint/policies/openclaw-sandbox.yaml

Add under network_policies: a weather block. Prefer access: full on ports 80 and 443 (TCP passthrough / no L7 inspection) so curl through the egress proxy works without CONNECT vs plain-GET mismatches:

  weather:
    name: weather
    endpoints:
      - host: wttr.in
        port: 443
        access: full
      - host: wttr.in
        port: 80
        access: full
    binaries:
      - { path: /usr/bin/curl }

Why not protocol: rest + rules: only? If the TUI or log shows policy: weather but reason: endpoint has L7 rules; use CONNECT, the policy matched but the proxy rejected how the request was sent (for example, a plain GET on port 80 where L7 rules expect a CONNECT tunnel). The easiest fix is the YAML above, or dropping protocol/tls/rules for this host. Prefer curl to https://wttr.in/… and, if needed, add --proxy "$HTTPS_PROXY".

Editing alone does not apply the change. On the host:

openshell policy set my-assistant \
  --policy "$(npm root -g)/nemoclaw/nemoclaw-blueprint/policies/openclaw-sandbox.yaml" \
  --wait

Or run nemoclaw onboard to reapply the static baseline (may recreate the sandbox depending on the wizard).

Do you need to restart? No DGX reboot. openshell policy set … --wait hot-reloads network policy. Gateway “Healthy” is enough.

Caution: edits under node_modules/nemoclaw/… can be overwritten on the next npm install -g nemoclaw; keep a copy or patch the tree under ~/.nemoclaw/source for durability.

7. When curl still fails: EAI_AGAIN and empty output

  • getaddrinfo EAI_AGAIN wttr.in — DNS/resolver path inside the sandbox; traffic often must go through the egress proxy. Inside the sandbox check echo $HTTP_PROXY $HTTPS_PROXY. If set, try: curl -sS --proxy "$HTTPS_PROXY" "https://wttr.in/London?format=3".
  • OPA deny on wttr.in + /usr/bin/curl — allow host and binary in one block. If the log says endpoint has L7 rules; use CONNECT, switch the weather block to access: full (see §6) and reapply policy.
  • Other sites (Google, httpbin) — default baseline is deny-by-default; failure is often policy, not “no internet”.
  • ping: command not found — normal in minimal images; use curl for checks.

8. Presets and permanent baseline

nemoclaw my-assistant policy-add
nemoclaw my-assistant policy-list

Official presets include telegram, slack, discord, npm, pypi, docker, huggingface, jira, outlook. There is no dedicated “weather” preset — add wttr.in manually as above.

Static permanence: merge into openclaw-sandbox.yaml and nemoclaw onboard. Dynamic: openshell policy set — see Nemo/OpenShell docs for session scope vs baseline.

9. Uninstall (optional)

export PATH="/root/.local/bin:$(npm config get prefix 2>/dev/null)/bin:$PATH"
( nemoclaw uninstall --yes ) 2>/dev/null || true
# … remove clones, ~/.nemoclaw, openshell binary, /tmp nemoclaw files (see full runbook)

10. Related

Alpha notice and breaking changes: see NemoClaw docs and _includes/alpha-statement.md in the repo. NVIDIA API / Endpoints validation errors: check keys at build.nvidia.com or switch provider during interactive onboard.

Categories
AI

DGX Spark: NemoClaw — Easiest Install Path (OpenShell, Spark, curl, onboard, Mac access)

DGX Spark: prerequisites, step-by-step non-interactive install on the DGX, then dashboard (forward + Mac SSH tunnel). SSH login, bash, and uninstall are optional and kept in their own sections.

Prerequisites

  • Docker — daemon running; after setup-spark.sh, use newgrp docker or log out and SSH in again if docker ps fails.
  • Git and curl — for cloning NemoClaw and running the installers.
  • Node.js / npm — you do not need to install these first; nemoclaw.sh checks or installs a supported Node runtime ([1/3] in the installer).

Optional checks (run on the DGX if you like):

docker ps
git --version
curl --version

Optional: SSH from your Mac to the DGX

Skip this if you are already logged into the DGX (console or another session).

ssh -i ~/.ssh/id_ed25519 root@YOUR_DGX_LAN_IP

Omit -i … if your default SSH key is already configured for the DGX.

Optional: use bash on the DGX

Skip this if you are already in bash. If the DGX drops you into zsh and you want a predictable shell for copy-paste:

bash

Install on the DGX (as root, non-interactive)

Run the steps below in order on the DGX. Replace YOUR_NVIDIA_API_KEY only on the machine — never put real keys in posts, screenshots, or chat.

Why bash -s -- --non-interactive: with curl … | bash, the installer clears NON_INTERACTIVE; the flag and NEMOCLAW_NON_INTERACTIVE=1 keep onboarding non-interactive.

Step 1 — NVIDIA API key

export NVIDIA_API_KEY="YOUR_NVIDIA_API_KEY"

Step 2 — Non-interactive installer / onboard

export NEMOCLAW_NON_INTERACTIVE=1

Step 3 — PATH (OpenShell + npm)

export PATH="/root/.local/bin:$(npm config get prefix 2>/dev/null)/bin:$PATH"

Step 4 — Install OpenShell

OpenShell is not installed by nemoclaw.sh.

curl -LsSf https://raw.githubusercontent.com/NVIDIA/OpenShell/main/install.sh | sh

Step 5 — Persist OpenShell PATH in ~/.bashrc

grep -q '.local/bin' ~/.bashrc || echo 'export PATH="/root/.local/bin:$PATH"' >> ~/.bashrc

Step 6 — OpenShell PATH for this session

export PATH="/root/.local/bin:$PATH"

Step 7 — Remove any old NemoClaw clone

rm -rf /root/NemoClaw

Step 8 — Clone NemoClaw (for the Spark script only)

git clone --depth 1 https://github.com/NVIDIA/NemoClaw.git /root/NemoClaw

Step 9 — Spark Docker fix — make the script executable

chmod +x /root/NemoClaw/scripts/setup-spark.sh

Step 10 — Run the Spark Docker fix

/root/NemoClaw/scripts/setup-spark.sh

If docker then fails with permission errors: run newgrp docker once, or log out and SSH in again.

Step 11 — Install NemoClaw from /root

Stay out of /root/NemoClaw for this step so the installer uses the release tree under ~/.nemoclaw/source.

cd /root
curl -fsSL https://www.nvidia.com/nemoclaw.sh | bash -s -- --non-interactive

Step 12 — Refresh PATH after install

export PATH="/root/.local/bin:$(npm config get prefix)/bin:$PATH"

Step 13 — Confirm nemoclaw is on PATH

command -v nemoclaw

Step 14 — Versions

nemoclaw --version
openshell --version

NVIDIA Endpoints errors (e.g. 404 / timeout) during onboard usually mean API key, account, or outbound access — check build.nvidia.com, or run interactive nemoclaw onboard once and pick another provider (e.g. OpenAI).

Dashboard (after install checks)

The UI listens inside the sandbox; start a forward on the DGX, then tunnel from the Mac where the browser runs.

Step 1 — On the DGX: PATH

export PATH="/root/.local/bin:$PATH"

Step 2 — On the DGX: start the forward

openshell forward start -d 18789 my-assistant

Step 3 — On your Mac: SSH tunnel (example DGX 192.168.1.182)

Leave this terminal open.

ssh -i ~/.ssh/id_ed25519 -N -L 18789:127.0.0.1:18789 root@192.168.1.182

Step 4 — On your Mac: tunnel with your DGX address

ssh -i ~/.ssh/id_ed25519 -N -L 18789:127.0.0.1:18789 root@YOUR_DGX_LAN_IP

Step 5 — On your Mac: tunnel without -i (default SSH key)

ssh -N -L 18789:127.0.0.1:18789 root@YOUR_DGX_LAN_IP

Step 6 — Browser on the Mac

Open http://127.0.0.1:18789/. Add #token=… if your setup shows a token (from config inside the sandbox). Prefer 127.0.0.1 over localhost if origin checks are strict.

Optional: user openclaw

State under /home/openclaw. Run setup-spark.sh once as root first. Then as openclaw: same OpenShell install, cd "$HOME" before nemoclaw.sh, same bash -s -- --non-interactive, and PATH including $HOME/.local/bin and the npm prefix.

Checks (optional)

docker ps
nvidia-smi
docker run --rm --runtime=nvidia --gpus all ubuntu:22.04 nvidia-smi

Local Ollama (optional)

See spark-install.md. Use OLLAMA_HOST=0.0.0.0 only on a trusted LAN.

Remote access

Private LAN addresses work on the same network or VPN. The Mac that runs the browser must run ssh -L.

Links

Publishing with PraisonAIWP

praisonaiwp update POST_ID --post-content "$(cat article.html)" --category "AI" --server default

Optional: uninstall (only to remove NemoClaw / this setup)

On the DGX as root. Skip entirely for a first-time install.

Uninstall step 1 — PATH

export PATH="/root/.local/bin:$(npm config get prefix 2>/dev/null)/bin:$PATH"

Uninstall step 2 — NemoClaw uninstaller

( nemoclaw uninstall --yes ) 2>/dev/null || true

Uninstall step 3 — Global npm package

npm uninstall -g nemoclaw 2>/dev/null || true

Uninstall step 4 — CLI symlinks

rm -f /usr/bin/nemoclaw /usr/local/bin/nemoclaw 2>/dev/null || true

Uninstall step 5 — Config and clone

rm -rf /root/NemoClaw /root/.nemoclaw /root/.config/openshell 2>/dev/null || true

Uninstall step 6 — Installer leftovers in /tmp

rm -f /tmp/nemoclaw.sh /tmp/nemoclaw-install.sh 2>/dev/null || true
find /tmp -maxdepth 1 -name 'nemoclaw-build-*' -exec rm -rf {} + 2>/dev/null || true

Uninstall step 7 — OpenShell binary (this install path)

rm -f /root/.local/bin/openshell 2>/dev/null || true

Optional: remove all unused Docker data

Removes all unused images/containers on the host — skip if you need other Docker workloads.

docker system prune -af

Optional: full reinstall

Run every block under Optional: uninstall, then repeat Install on the DGX and Dashboard above.

Categories
DevOps

Infrastructure Usage Analytics


Cloud Platform Overview · Rolling 13-Month Window
✦ Redacted Public Version
Summary Dashboard · Sanitised
  • Overall trend: rising; the latest period is higher than the prior one
  • Traffic layer share: largest category
This version removes provider-specific, regional, pricing, and instance-level details while preserving the overall dashboard structure and trend visibility.
  • Spend direction: upward (13-month view, ↑ continuing increase)
  • Current run rate: high (latest period, ↑ above earlier baseline)
  • Period growth: strong (start → latest)
  • Traffic layer: #1 largest service group (↑ increased importance)
  • Peak day trend: new highs (recent period, ↑ above prior peaks)
  • Direct egress: lower (relative trend, ↓ offset elsewhere)
Charts (sanitised, data redacted): daily spend trend over the full period; monthly cost index (month-over-month); MoM growth rate (%); cost distribution by service group, ranked by relative spend; service composition over time (monthly spend by service group); regional distribution (spend by zone group); top resource types (resource group breakdown).
What Changed — High Level

  • Early period (baseline): low-to-moderate baseline; core platform established. The environment began with a standard production stack including compute, storage, network routing, and image distribution. Costs were concentrated in core infrastructure categories. (Compute, persistent storage, traffic routing, image distribution.)
  • Mid period (steady state): stable operating range; gradual growth with no major architecture shifts. Usage grew steadily while the platform remained structurally similar. Normal month-to-month fluctuations were driven mainly by load and retained data. (Stable fleet, organic traffic growth, predictable trend.)
  • Growth event (spike): temporary surge; a short-lived increase in workload intensity. A brief cost surge suggests temporary scaling, migration, or burst activity. The pattern later returned closer to the prior operating range. (Higher compute, higher storage activity, returned to trend.)
  • Service addition (new capability): additional control-plane services; security and management services expanded. The platform introduced additional supporting services for secrets, observability, and background processing, indicating a more mature production posture. (Secrets/certificates, observability, background processing.)
  • Recent period (new data): higher run rate than before; the traffic handling layer became dominant. A front-line traffic and protection layer now absorbs a large portion of cost that had previously appeared elsewhere. Direct egress dropped, but total traffic-layer cost became more prominent. (Lower direct egress, higher traffic-layer cost, rising daily peaks.)
🚨 Immediate — Do Now (next 1–2 weeks)

  1. Review traffic-layer request and policy efficiency: audit request volume, health checks, caching coverage, and security rule complexity in the external traffic layer. Focus on waste reduction rather than functional changes. (Highest priority)
  2. Set budget and anomaly alerts: configure threshold alerts and notifications so cost acceleration is detected early rather than after monthly close. (Preventative)
  3. Validate compute growth: review recently added compute categories and verify there are no forgotten or underutilised workloads remaining active. (Potential reduction)
  4. Investigate anomalous low-cost days: confirm whether these represent billing boundaries, reporting gaps, or real workload interruptions. (Operational risk)
⚙️ Medium-Term — 1–3 Months (efficiency improvements)

  1. Right-size persistent storage tiers: measure actual utilisation and move suitable volumes or objects to lower-cost tiers where performance requirements allow. (Likely win)
  2. Use committed pricing where workloads are stable: for continuously running compute, evaluate reservations or savings plans to reduce baseline cost. (Recurring savings)
  3. Increase cache effectiveness: extend cache rules and review the cache hit ratio for eligible assets to reduce repeated processing in the traffic layer. (Traffic-layer efficiency)
  4. Apply lifecycle policies to object storage: automatically move colder data into lower-cost storage classes based on age and access frequency. (Storage savings)
🎯 Long-Term — 3–12 Months (governance and architecture)

  1. Re-evaluate the edge architecture: compare the current front-line traffic design with alternatives to ensure the present cost profile still makes sense for actual traffic shape and geography. (Architecture-level savings)
  2. Control compute growth with policy: introduce approval paths, budget caps, and automated recommendations to limit unplanned compute expansion. (Ongoing control)
  3. Enforce tagging and ownership: require cost-centre, owner, environment, and product tags so accountability is clear across all workloads. (Governance foundation)
  4. Adopt a recurring FinOps review: use a weekly cost review and monthly waste-reduction plan to keep spend from drifting upward unchecked. (Ongoing optimisation)
🆕
Traffic Costs Shifted Layers
Direct egress fell sharply while front-line traffic handling became the dominant category. This suggests an architectural shift rather than an outright removal of cost.
📊
One Service Group Now Dominates
A single service group now accounts for the largest share of cost, making it the most important optimisation target.
📈
Daily Peaks Keep Rising
Recent peaks are meaningfully above the earlier baseline, indicating that the environment is still on an upward trajectory.
🖥️
Compute Trend Needs Review
Compute has increased alongside broader platform growth and should be checked for efficiency and forgotten workloads.
💾
Storage Remains Significant
Persistent storage continues to represent a major share and is a good candidate for tiering and right-sizing work.
⚠️
Anomalies Were Detected
A few isolated low-cost days appear in the series and should be explained to distinguish reporting quirks from service interruptions.
(Chart: All Service Groups — Relative Breakdown; data redacted)
Categories
PraisonAI

PraisonAI is Safe from LiteLLM Supply Chain Incident (March 2026)

On March 24, 2026, LiteLLM disclosed a suspected supply chain attack affecting PyPI packages litellm==1.82.7 and litellm==1.82.8. These versions contained a credential stealer targeting environment variables, SSH keys, cloud provider credentials, and Kubernetes tokens. PraisonAI users are not affected.

What Happened

A compromised Trivy dependency in LiteLLM’s CI/CD pipeline allowed an attacker to bypass official workflows and publish two malicious packages directly to PyPI. The payload in proxy_server.py and litellm_init.pth harvested secrets and exfiltrated them to an unauthorized domain. Both versions have been removed from PyPI, maintainer credentials rotated, and Google’s Mandiant team engaged for forensic analysis.

Why PraisonAI Is Safe

  • Compromised versions installed: ✅ No. PraisonAI pins litellm>=1.81.0; currently installed: 1.81.1.
  • Malicious file litellm_init.pth: ✅ Not found. IoC scan of site-packages is clean.
  • Outbound traffic to models.litellm[.]cloud: ✅ None. No exfiltration observed.
  • Compromised packages on PyPI: ✅ Removed. 1.82.7 and 1.82.8 pulled by PyPI.

Our Response

  • Verified safe: Full IoC checks confirmed no compromise across PraisonAI infrastructure
  • Dependency audit: Confirmed all three pyproject.toml files pin LiteLLM at safe versions
  • Defensive pinning: Evaluating upper-bound version caps until LiteLLM completes their supply-chain review
  • Monitoring: Tracking LiteLLM’s ongoing investigation and Mandiant forensics for any new findings

What You Should Do

If you are using PraisonAI with the default dependency pins, no action is required. If you independently installed litellm outside of PraisonAI and may have received version 1.82.7 or 1.82.8 on March 24, follow the LiteLLM remediation steps immediately: rotate all secrets, inspect your filesystem for litellm_init.pth, and audit your version history.
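The filesystem inspection can be scripted. Below is a minimal sketch of such a check: it looks for the litellm_init.pth marker and a compromised installed version. The function name and directory handling are illustrative; only the IoC filename and version numbers come from the advisory above.

```python
from importlib.metadata import version, PackageNotFoundError  # Python 3.8+
from pathlib import Path

COMPROMISED = {"1.82.7", "1.82.8"}
IOC_FILENAME = "litellm_init.pth"

def scan_for_iocs(site_packages_dirs):
    """Return a list of findings: compromised version pins or IoC files on disk."""
    findings = []
    try:
        installed = version("litellm")
        if installed in COMPROMISED:
            findings.append(f"compromised litellm=={installed} installed")
    except PackageNotFoundError:
        pass  # litellm not installed at all
    for d in site_packages_dirs:
        marker = Path(d) / IOC_FILENAME
        if marker.exists():
            findings.append(f"IoC file present: {marker}")
    return findings

if __name__ == "__main__":
    import site
    hits = scan_for_iocs(site.getsitepackages())
    print(hits or "clean")
```

Run it inside each virtual environment you use, since every environment has its own site-packages tree.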

Timeline

  • Mar 24, 10:39 UTC: compromised litellm==1.82.7 published to PyPI
  • Mar 24, ~14:00 UTC: compromised litellm==1.82.8 published
  • Mar 24, 16:00 UTC: both versions removed from PyPI
  • Mar 24, 14:00 ET: LiteLLM publishes security advisory
  • Mar 26: PraisonAI confirms safe; full IoC scan clean

Security is a core principle of PraisonAI. We follow protocol-driven, lazy-loading architecture with carefully managed dependency boundaries. If you have questions, reach out via GitHub Issues.

Categories
PraisonAI

PraisonAI Design Philosophy: Few Lines, Full Power

PraisonAI is built on one sentence: few lines of code to do the task — powerful, lightweight, safe by default.

Architecture at a Glance

PraisonAI design philosophy diagram showing core philosophy, engineering principles, architecture layers, and canonical paths

The Hard Boundary Rule

  • Core SDK (praisonaiagents): protocols, hooks, adapters, base classes. Forbidden: heavy imports, real DB calls, CLI.
  • Wrapper (praisonai): CLI, integrations, DBs, observability. Forbidden: core logic, re-implementing protocols.
  • Tools (praisonai-tools): pluggable tools, community extensions. Forbidden: bloating core or wrapper.

Five Non-Negotiables

  1. DRY — no duplication across layers. One source of truth per concept.
  2. No perf impact — lazy imports everywhere. No module-level heavy work, no global singletons. Target: under 200ms package import.
  3. TDD first — write a failing test before any implementation line.
  4. Async + multi-agent safe by default — all I/O has async variants; no shared mutable state between agents.
  5. Every feature ships 3 ways — Python API, CLI command, YAML config.
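The "lazy imports everywhere" rule can be sketched with PEP 562 module-level __getattr__: the heavy dependency loads on first attribute access, not at package import. The module and attribute names below are illustrative, not PraisonAI's real layout.

```python
# Lazy imports via PEP 562: the heavy dependency is imported on first
# attribute access, not when the package itself is imported.
import importlib
import sys
import types

# Stand-in for a package __init__.py (names are illustrative).
pkg = types.ModuleType("demo_pkg")

_LAZY = {"json_tools": "json"}  # public name -> real module loaded on demand

def __getattr__(name):
    if name in _LAZY:
        mod = importlib.import_module(_LAZY[name])
        setattr(pkg, name, mod)  # cache so later lookups skip this hook
        return mod
    raise AttributeError(name)

pkg.__getattr__ = __getattr__
sys.modules["demo_pkg"] = pkg

import demo_pkg
print(demo_pkg.json_tools.dumps({"ok": True}))  # json only loaded here
```

This is how a package can keep import time low while still exposing heavy optional subsystems under flat names.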

Simple API in Practice

The philosophy is best understood in code. Three lines for a basic agent, five for tools, ten for full memory and hooks:

# Level 1: minimal
from praisonaiagents import Agent
agent = Agent(name="assistant", instructions="Be helpful")
response = agent.start("Hello!")

# Level 2: with tools
from praisonaiagents import Agent, tool

@tool
def search(query: str) -> list:
    """Search the web."""
    return [{"result": query}]

agent = Agent(name="researcher", tools=[search])

# Level 3: full control
from praisonaiagents import Agent, MemoryConfig, ExecutionConfig
agent = Agent(
    name="coder",
    llm="gpt-4o-mini",
    memory=MemoryConfig(use_long_term=True),
    execution=ExecutionConfig(code_execution=True),
)

Extension Points

  • Custom tool: tools/base.py, via the @tool decorator or a BaseTool subclass
  • Custom DB adapter: db/*, implement DbAdapter / AsyncDbAdapter
  • Custom hook: hooks/, via @before_tool / @after_tool
  • Custom memory: memory/protocols.py, implement MemoryProtocol

The architecture enforces these rules at the boundary level — the core SDK physically cannot contain heavy imports, and every new feature must pass through a failing test before a single implementation line is written.
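The @tool extension point can be sketched as a registry-building decorator: it captures the function's name, docstring, and parameters so an agent can advertise the tool to the LLM. This is an illustration of the pattern, not PraisonAI's internal implementation.

```python
# Minimal sketch of a @tool decorator: register the function and its schema.
import functools
import inspect

TOOL_REGISTRY = {}

def tool(func):
    """Register a plain function as an agent tool, capturing its metadata."""
    sig = inspect.signature(func)
    TOOL_REGISTRY[func.__name__] = {
        "callable": func,
        "doc": inspect.getdoc(func) or "",
        "params": list(sig.parameters),
    }
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@tool
def search(query: str) -> list:
    """Search the web."""
    return [{"result": query}]

print(TOOL_REGISTRY["search"]["params"])  # ['query']
print(search("agents"))
```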

Categories
PraisonAI

How PraisonAI Agents Avoid Getting Stuck: Hook-Based Loop Detection and Context Compaction

Modern AI agents can fall into infinite loops or exhaust their context window. PraisonAI now ships two production-grade safeguards wired into the agent hook system.

Agent Hook Flow

%%{init: {"theme": "base", "themeVariables": {"background": "transparent", "lineColor": "#000000"}}}%%
graph TD
    A[Agent.start] --> B[_achat_impl]
    B --> C{Tool call?}
    C -->|yes| D[BEFORE_TOOL hook]
    D --> E{Loop detected?}
    E -->|warning| F[Log warning, continue]
    E -->|critical| G[Block tool]
    E -->|no| H[Execute tool]
    H --> I[AFTER_TOOL hook]
    I --> B
    B --> J[LLM call]
    J --> K{context_compaction?}
    K -->|yes| L[BEFORE_COMPACTION]
    L --> M[ContextCompactor.compact]
    M --> N[AFTER_COMPACTION]
    N --> O[Send to LLM]
    K -->|no| O

    classDef hook fill:#189AB4,color:#fff
    classDef agent fill:#8B0000,color:#fff
    classDef decision fill:#444,color:#fff

    class D,I,L,N hook
    class A,B,H,O agent
    class C,E,K decision

1. Tool Loop Detection

Three detectors, stdlib-only, disabled by default:

  • generic_repeat: fires when the same (tool, args) pair is called ≥ N times
  • poll_no_progress: fires when a poll tool returns an identical result N times in a row
  • ping_pong: fires when the agent alternates between exactly 2 tool states

# One import activates the BEFORE_TOOL hook
import praisonaiagents.plugins.loop_detection_plugin

agent = Agent(name="researcher", tools=[search, fetch])
result = agent.start("Research quantum computing trends")
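The generic_repeat detector can be sketched in plain stdlib Python: count how often the same (tool, args) pair appears in a sliding window and escalate from warning to critical. Thresholds and class names below are illustrative, not the plugin's real code.

```python
from collections import deque

class GenericRepeatDetector:
    def __init__(self, warn_after=3, block_after=5):
        self.warn_after = warn_after
        self.block_after = block_after
        self.history = deque(maxlen=block_after)  # sliding window of calls

    def check(self, tool_name, args):
        """Return 'ok', 'warning', or 'critical' for this tool call."""
        key = (tool_name, tuple(sorted(args.items())))
        self.history.append(key)
        repeats = sum(1 for k in self.history if k == key)
        if repeats >= self.block_after:
            return "critical"   # a BEFORE_TOOL hook would block the call
        if repeats >= self.warn_after:
            return "warning"    # the hook logs and lets the call continue
        return "ok"

det = GenericRepeatDetector()
for _ in range(5):
    verdict = det.check("fetch", {"url": "https://example.com"})
print(verdict)  # 'critical' on the 5th identical call
```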

2. Context Window Compaction

Automatically trims chat_history before hitting the token limit. Wired into all three LLM paths (sync, async custom LLM, async OpenAI):

from praisonaiagents import Agent, ExecutionConfig

agent = Agent(
    name="researcher",
    execution=ExecutionConfig(
        context_compaction=True,
        max_context_tokens=8000
    )
)
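A hedged sketch of what compaction can look like: trim the oldest non-system messages until an estimated token count fits the budget. The 4-characters-per-token heuristic and the message shape are assumptions for illustration, not PraisonAI internals.

```python
def estimate_tokens(text):
    return max(1, len(text) // 4)  # rough chars-per-token heuristic

def compact(chat_history, max_context_tokens):
    """Return a trimmed copy of chat_history that fits the token budget."""
    system = [m for m in chat_history if m["role"] == "system"]
    rest = [m for m in chat_history if m["role"] != "system"]
    def total(msgs):
        return sum(estimate_tokens(m["content"]) for m in msgs)
    while rest and total(system + rest) > max_context_tokens:
        rest.pop(0)  # drop the oldest turn first; keep system prompts
    return system + rest

history = [{"role": "system", "content": "Be concise."}] + [
    {"role": "user", "content": "x" * 400} for _ in range(10)
]
trimmed = compact(history, max_context_tokens=300)
print(len(trimmed))  # system prompt + the 2 most recent turns
```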

Performance

  • Import time: 27 ms
  • Per-tool overhead (no hooks): 0 ms
  • Per-tool overhead (with plugin): ~1 μs
  • New dependencies: 0 (stdlib only)

Both features are opt-in, non-fatal, and multi-agent safe (thread-local history).

Categories
claw

PraisonAI Scheduler and Background: Sync Wrappers and ScheduleLoop

Background Tasks from Sync Code

Use submit_sync() and submit_agent_sync() to run functions or agent tasks in the background from plain Python — no asyncio needed.

submit_sync(func, args, kwargs, name, timeout, on_complete) → BackgroundTask

from praisonaiagents.background.runner import BackgroundRunner

runner = BackgroundRunner()
task = runner.submit_sync(func=my_heavy_computation, args=(data,), name="compute")
print(task.status)        # "running"
print(task.result)        # available when done
print(task.is_successful)
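Under the hood, a sync submit wrapper can be sketched with a thread pool: the call returns immediately with a handle, and status and result are polled or awaited. The class names below are illustrative, not BackgroundRunner's real code.

```python
from concurrent.futures import ThreadPoolExecutor

class SketchTask:
    def __init__(self, future, name):
        self.future = future
        self.name = name
    @property
    def status(self):
        return "completed" if self.future.done() else "running"
    @property
    def result(self):
        return self.future.result()  # blocks until done
    @property
    def is_successful(self):
        return self.future.done() and self.future.exception() is None

class SketchRunner:
    def __init__(self):
        self.pool = ThreadPoolExecutor(max_workers=4)
    def submit_sync(self, func, args=(), name="task"):
        return SketchTask(self.pool.submit(func, *args), name)

runner = SketchRunner()
task = runner.submit_sync(sum, args=([1, 2, 3],), name="compute")
print(task.result)         # 6
print(task.is_successful)  # True
```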

submit_agent_sync(agent, prompt, name, timeout, on_complete) → BackgroundTask

from praisonaiagents import Agent
from praisonaiagents.background.runner import BackgroundRunner

agent = Agent(name="researcher", instructions="Research AI trends")
runner = BackgroundRunner()
task = runner.submit_agent_sync(agent, "What are top AI trends?", name="research")

Scheduled Jobs with ScheduleLoop

Use ScheduleLoop to poll for due jobs and run your callback. Jobs are stored in ~/.praisonai/schedules/.

ScheduleLoop(on_trigger, store=None, tick_seconds=30)

on_trigger is called with each due ScheduleJob. Methods: loop.start(), loop.stop(timeout=5.0), loop.is_running.
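The tick-based loop can be sketched as a daemon thread that wakes every tick_seconds, pops due jobs, and calls on_trigger. Job storage here is an in-memory list for illustration; the real loop persists jobs under ~/.praisonai/schedules/.

```python
import threading
import time

class SketchScheduleLoop:
    def __init__(self, on_trigger, tick_seconds=30):
        self.on_trigger = on_trigger
        self.tick_seconds = tick_seconds
        self.jobs = []                      # (due_at_epoch, job) pairs
        self._stop = threading.Event()
        self._thread = None

    def add(self, due_in, job):
        self.jobs.append((time.time() + due_in, job))

    def _run(self):
        while not self._stop.is_set():
            now = time.time()
            due = [j for t, j in self.jobs if t <= now]
            self.jobs = [(t, j) for t, j in self.jobs if t > now]
            for job in due:
                self.on_trigger(job)        # hand each due job to the callback
            self._stop.wait(self.tick_seconds)

    def start(self):
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def stop(self, timeout=5.0):
        self._stop.set()
        self._thread.join(timeout)

fired = []
loop = SketchScheduleLoop(on_trigger=fired.append, tick_seconds=0.05)
loop.add(0, {"name": "hello"})
loop.start()
time.sleep(0.3)
loop.stop()
print(fired)  # [{'name': 'hello'}]
```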

End-to-End Example

Wire an agent with schedule tools so user requests like “Remind me to check email every morning” run on schedule:

from praisonaiagents import Agent
from praisonaiagents.tools import schedule_add, schedule_list, schedule_remove
from praisonaiagents.scheduler import ScheduleLoop
from praisonaiagents.background.runner import BackgroundRunner

agent = Agent(
    name="assistant",
    instructions="You can set reminders and schedules.",
    tools=[schedule_add, schedule_list, schedule_remove],
)
runner = BackgroundRunner()

def handle_scheduled_job(job):
    print(f"🔔 Firing: {job.name} — {job.message}")
    runner.submit_agent_sync(agent, job.message, name=f"schedule-{job.name}")

loop = ScheduleLoop(on_trigger=handle_scheduled_job, tick_seconds=30)
loop.start()

agent.start("Remind me to check email every morning at 7am")

Usage Recipes

Simple background task:

task = runner.submit_sync(func=heavy_computation, args=([1,2,3,4,5],), name="compute")
while not task.is_completed:
    time.sleep(1)
print(task.result)

Schedule-driven notifications: In on_trigger, call your notification logic (e.g. requests.post(SLACK_WEBHOOK, ...)). Add schedules via LLM tools or programmatically with FileScheduleStore and ScheduleJob.

Bot integration: Start ScheduleLoop alongside your bot. In on_trigger, use submit_agent_sync and optionally route results to a channel via job.delivery.

Categories
claw

BotOS: Turn Your AI Agent into a Messaging Bot

What is BotOS?

BotOS turns your AI agent into a messaging bot. Users chat with it on Telegram, Discord, Slack, or WhatsApp — no app to install, no website to visit.

Your agent is the brain; the bot is the messenger. Messages flow: User → Messaging app → Bot → Agent → back to the user.
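That flow can be sketched as a thin adapter: the bot layer receives a platform message, forwards the text to the agent, and sends the agent's reply back on the same transport. The function names below are illustrative, not the BotOS API.

```python
def make_bot(agent_reply, send):
    """Wire an agent reply function to a transport send function."""
    def on_message(user_id, text):
        reply = agent_reply(text)   # Bot -> Agent
        send(user_id, reply)        # Agent -> back to the user
    return on_message

outbox = []
bot = make_bot(
    agent_reply=lambda text: f"You said: {text}",   # stand-in for the agent
    send=lambda uid, msg: outbox.append((uid, msg)),  # stand-in for Telegram etc.
)
bot("user-1", "hello")
print(outbox)  # [('user-1', 'You said: hello')]
```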

Quick Start with CLI

Get a bot token (e.g. from Telegram @BotFather), then:

export TELEGRAM_BOT_TOKEN="your-token"
praisonai bot telegram --token $TELEGRAM_BOT_TOKEN

Same pattern for Discord and Slack. A default AI assistant is created automatically.

Which Platform?

Telegram: easiest, great for personal bots. Discord: communities and dev groups. Slack: teams and workplaces. WhatsApp: business, needs more setup.

Add Capabilities

Enable memory, web search, or knowledge with flags:

praisonai bot telegram --token $TOKEN --memory --web

Full docs: docs.praison.ai/docs/concepts/bot-os

Categories
claw

AgentOS: Deploy AI Agents as Production APIs

What is AgentOS?

AgentOS turns your AI agents into production web services. You get ready-made API routes, health checks, and docs — no custom server code needed.

Quick Start

Install:

pip install praisonai[os]

Create a Python file:

from praisonai import AgentOS
from praisonaiagents import Agent

app = AgentOS(agents=[
    Agent(instructions="You are a helpful assistant")
])
app.serve(port=8080)

Or use the CLI:

praisonai app --port 8080

You get endpoints at /, /health, /api/agents, /api/chat, and /docs.

What You Get

  • POST /api/chat — chat with your agent
  • GET /health — health check for deployments
  • GET /docs — auto-generated API documentation
  • CORS, WebSocket support, and configurable workers
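A deployment probe for the /health endpoint can be written with the stdlib. To keep this sketch self-contained it spins up a stub server with the same route; in practice you would point the client at your real AgentOS port (8080 above). The response body shape is an assumption here.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()
    def log_message(self, *args):  # keep probe output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), StubHandler)  # port 0 = free port
threading.Thread(target=server.serve_forever, daemon=True).start()

def check_health(port):
    url = f"http://127.0.0.1:{port}/health"
    with urllib.request.urlopen(url, timeout=5) as r:
        return r.status == 200 and json.loads(r.read())["status"] == "ok"

print(check_health(server.server_port))  # True
server.shutdown()
```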

Full docs: docs.praison.ai/docs/concepts/agentos

Categories
claw

PraisonAI Claw: Your Complete AI Assistant in One Install

What is PraisonAI Claw?

PraisonAI Claw gives you a complete AI assistant in one install: a web dashboard with chat, agents, memory, knowledge, and bots for Telegram, Discord, and Slack.

Quick Start — Dashboard

(1) Install Claw:

pip install "praisonai[claw]"

(2) Set your API key:

export OPENAI_API_KEY="your-api-key-here"

(3) Launch the dashboard — no app.py needed:

praisonai claw

Open http://localhost:8082 — full dashboard with 13+ pages and 3 built-in agents (Researcher, Writer, Coder).

CLI options: --port 9000 --host 127.0.0.1 --app my-dashboard.py --reload

YAML Chat Mode

For a quick chat agent without Python, create chat.yaml and run:

aiui run chat.yaml

Opens at http://localhost:8000.

What’s included

With one install you get a full web dashboard, chat, 100+ tools, web search, and bot adapters for Telegram, Discord, and Slack. The default app is written to ~/.praisonai/claw/app.py on first run — you can edit it to customize.

%%{init: {'theme': 'base', 'themeVariables': {'background': 'transparent', 'lineColor': '#000000'}}}%%
graph LR
    A["pip install"] --> B["praisonai claw"]
    B --> C["Dashboard :8082"]
    A --> D["chat.yaml"]
    D --> E["aiui run"]
    E --> F["Chat :8000"]
    style A fill:#1e40af,color:#fff
    style B fill:#7c3aed,color:#fff
    style C fill:#2d5016,color:#fff
    style D fill:#7c3aed,color:#fff
    style E fill:#2d5016,color:#fff
    style F fill:#4a9eff,color:#fff

Dashboard and bots

Claw runs the AIUI dashboard with sidebar pages for chat, channels, agents, skills, memory, knowledge, cron, guardrails, sessions, usage, config, logs, and debug. Connect to Telegram, Discord, or Slack — each bot responds to /status, /new, and /help. Docker: docker run -p 8082:8082 -e OPENAI_API_KEY=sk-xxx praisonai:claw.

Full docs: docs.praison.ai/docs/concepts/claw