hugging-face-jobs

This skill should be used when users want to run any workload on Hugging Face Jobs infrastructure. Covers UV scripts, Docker-based jobs, hardware selection, cost estimation, authentication with tokens, secrets management, timeout configuration, and result persistence. Designed for general-purpose compute workloads including data processing, inference, experiments, batch jobs, and any Python-based tasks. Should be invoked for tasks involving cloud compute, GPU workloads, or when users mention running jobs on Hugging Face infrastructure without local setup.

Hugging Face Jobs - Cloud Machine Learning Job Scheduling Platform

Overview


Hugging Face Jobs is a managed compute platform that lets you run any workload in the cloud with no local setup. It supports CPU, GPU, and TPU hardware and can persist results to the Hugging Face Hub.

Use Cases

1. Data Processing and Batch Inference


Process large-scale datasets without needing local compute. Whether converting and filtering data or running batch inference over thousands of samples, tasks can be completed efficiently in the cloud. Supports streaming to avoid downloading entire datasets.

2. Machine Learning Experiments and Training


Run reproducible ML experiments and benchmarks. Test code without a local GPU, or fine-tune models using cloud GPU/TPU. Supports checkpointing and resuming from interruptions.

3. Scheduled Jobs and Automation


Create scheduled jobs with CRON expressions to automatically run data processing, model inference, or report generation hourly, daily, or on a custom schedule. Use Webhooks to trigger jobs automatically when repository changes occur.
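As a minimal sketch, scheduled jobs take standard 5-field CRON expressions. The schedule labels below are our own illustrative names, not part of any Jobs API:

```python
# Common 5-field CRON expressions (minute, hour, day-of-month, month,
# day-of-week) of the kind a scheduled job accepts.
COMMON_SCHEDULES = {
    "hourly": "0 * * * *",   # at minute 0 of every hour
    "daily": "0 3 * * *",    # every day at 03:00
    "weekly": "0 3 * * 1",   # every Monday at 03:00
}

def is_five_field_cron(expr: str) -> bool:
    """Cheap sanity check: a standard CRON expression has exactly 5 fields."""
    return len(expr.split()) == 5

for name, expr in COMMON_SCHEDULES.items():
    assert is_five_field_cron(expr)
    print(f"{name}: {expr}")
```

Webhook-triggered jobs use the same job definition; only the trigger differs.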

Core Features

UV Script Support


Use UV scripts with inline dependency declarations via PEP 723, with no additional configuration files required. Submit Python code directly, and dependencies are automatically installed. Supports specifying a custom Python version and additional runtime dependencies.
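As a minimal sketch, a UV script declares its dependencies in a PEP 723 comment block at the top of the file. This example keeps the body stdlib-only so it runs anywhere; a real workload would list packages such as `datasets` in the `dependencies` array:

```python
# /// script
# requires-python = ">=3.10"
# dependencies = []  # e.g. ["datasets", "huggingface_hub"] for real workloads
# ///
"""A self-contained UV script: `uv run` reads the header above and installs
the listed dependencies before executing the file."""
import json

def normalize(records):
    """Toy processing step: lowercase the text field of each record."""
    return [{**r, "text": r["text"].lower()} for r in records]

if __name__ == "__main__":
    sample = [{"id": 1, "text": "Hello"}, {"id": 2, "text": "WORLD"}]
    print(json.dumps(normalize(sample)))
```

Because the metadata lives in comments, the same file also runs as plain Python when its dependencies happen to be installed already.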

Flexible Hardware Selection


Choose the right hardware configuration—from lightweight CPUs to high-end GPU/TPU—based on your needs. Includes GPU options such as T4, L4, A10G, and A100, as well as multi-GPU parallel configurations. Billed hourly—pay for what you use.

Full Job Lifecycle Management


After submitting a job, you can monitor status in real time, view logs, and cancel running tasks. Scheduled jobs support pause, resume, and deletion. Results can be automatically pushed to the Hugging Face Hub and support both private and public repositories.
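A hedged sketch of that lifecycle, assuming a recent `huggingface_hub` release that exposes job helpers (`run_job`, `fetch_job_logs`, `cancel_job`) on `HfApi` and that a valid token is available; check the method names and signatures against your installed version before relying on them:

```python
import os

def demo_lifecycle():
    # Assumption: these HfApi job methods exist in recent huggingface_hub
    # releases; verify against your installed version.
    from huggingface_hub import HfApi

    api = HfApi()
    job = api.run_job(
        image="python:3.12",
        command=["python", "-c", "print('hello from Jobs')"],
        flavor="cpu-basic",
    )
    print("submitted:", job.id)
    for line in api.fetch_job_logs(job_id=job.id):  # stream logs as they arrive
        print(line)
    api.cancel_job(job_id=job.id)  # stop a running job early if needed

# Only attempt a real submission when credentials are available.
if os.environ.get("HF_TOKEN"):
    demo_lifecycle()
```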

Frequently Asked Questions

Is Hugging Face Jobs free?


Hugging Face Jobs requires a paid Pro, Team, or Enterprise plan to use. There is no free tier. Billing is based on actual usage duration: CPU basic configuration is about $0.10/hour, and GPU ranges from about $1–10+/hour depending on the type.

How do I choose the right hardware configuration?


For lightweight tasks like data processing and testing, use cpu-basic or cpu-upgrade. For small models (<1B parameters), use t4-small. For medium models (1–7B), t4-medium or l4x1 is recommended. For large models (7–13B), use a10g-small or a10g-large. For extremely large models or high-throughput scenarios, choose a100-large or a multi-GPU configuration. TPU is suitable for JAX/Flax workloads.

Will results be lost after the job finishes?


Yes. The Jobs environment is temporary, and all files are deleted when the job ends. You must persist results proactively. The recommended approach is to push to the Hugging Face Hub (requires adding the HF_TOKEN secret in the job configuration). You can also use external storage such as S3/GCS, or send results to your own service via API.
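A sketch of the recommended pattern, assuming the job was configured with an HF_TOKEN secret and lists `huggingface_hub` as a dependency; the repo id is a placeholder:

```python
import json
import os

# Write results to a local file first; everything on disk disappears
# when the job's temporary environment is torn down.
results = {"processed": 1234, "errors": 0}
with open("results.json", "w") as f:
    json.dump(results, f)

# Push to the Hub only when the HF_TOKEN secret was provided to the job.
token = os.environ.get("HF_TOKEN")
if token:
    from huggingface_hub import HfApi
    HfApi(token=token).upload_file(
        path_or_fileobj="results.json",
        path_in_repo="results.json",
        repo_id="your-username/job-results",  # placeholder repo
        repo_type="dataset",
    )
else:
    print("HF_TOKEN not set; results kept local only")
```

Uploading to a `dataset` repo keeps outputs versioned and shareable; swap in an S3/GCS client or an HTTP POST to your own service if you prefer external storage.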