hugging-face-cli
Execute Hugging Face Hub operations using the `hf` CLI. Use when the user needs to download models/datasets/spaces, upload files to Hub repositories, create repos, manage local cache, or run compute jobs on HF infrastructure. Covers authentication, file transfers, repository creation, cache operations, and cloud compute.
Hugging Face CLI - HF Model Management Command-Line Tool
Skills Overview
Hugging Face CLI (the `hf` command) provides terminal access to the Hugging Face Hub, supporting downloads and uploads of models, datasets, and Spaces; repository management; local cache operations; and cloud GPU compute jobs.
Use Cases
1. Getting Models and Datasets
Quickly download pre-trained models from the Hugging Face Hub to a local directory, or use a caching mechanism to efficiently manage multiple model versions—especially suitable for development scenarios that require offline deployment or local inference.
2. Model Publishing and Collaboration
Create public or private repositories, upload trained model weights and configuration files, manage version tags, and share AI results with your team, all without switching to a web interface to complete the publishing workflow.
3. Cloud Computing Jobs
Submit GPU compute jobs directly from the terminal, choose the right GPU configuration (e.g., A10G, H100), and deploy inference endpoints—so you can run models in the cloud without writing additional deployment code.
Core Features
Model Transfer Management
Use `hf download` to download an entire repository or specific files to local storage, with support for file-pattern filtering and selecting revision branches. Use `hf upload` to upload a single file or an entire directory; you can customize the commit message or open a Pull Request.
Repository and Cache Operations
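The transfer commands above can be sketched as follows. The repository IDs and local paths are placeholders, and flag names assume a recent `hf` CLI release:

```shell
# Download only the safetensors weights from a pinned revision
# into a local directory (instead of the shared cache)
hf download meta-llama/Llama-3.2-1B-Instruct \
  --include "*.safetensors" \
  --revision main \
  --local-dir ./model

# Upload a local checkpoint directory with a custom commit
# message, opening a Pull Request instead of pushing to main
# (my-org/my-model is a hypothetical repository)
hf upload my-org/my-model ./checkpoints . \
  --commit-message "Add fine-tuned weights" \
  --create-pr
```

The `--include` pattern keeps the download small when you only need the weights and not, say, the original PyTorch `.bin` files.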
Manage model, dataset, and Spaces repositories with `hf repo create/delete/move`. Use `hf cache ls/prune/verify` to inspect and clean local caches and keep disk usage under control.
Browsing Hub Resources and Computing
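A minimal sketch of the repository lifecycle, assuming the subcommand names above and using placeholder repo names:

```shell
# Create a private model repository
hf repo create my-org/my-model --repo-type model --private

# Rename/transfer it later; delete it when no longer needed
hf repo move my-org/my-model my-org/my-model-v2
hf repo delete my-org/my-model-v2
```

All three operations require authentication with write access to the target namespace.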
Use `hf models/datasets/spaces ls/info` to browse and search Hub resources. Submit cloud GPU jobs with `hf jobs run`. Manage inference endpoint deployment, scaling, and lifecycle with `hf endpoints deploy`.
Frequently Asked Questions
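As a sketch of the compute workflow above, a job can be submitted against a Docker image with a chosen hardware flavor. The image tag and flavor name here are illustrative; check the `hf jobs` documentation for the values available to your account:

```shell
# Run a one-off command on an A10G GPU in the cloud
hf jobs run --flavor a10g-small pytorch/pytorch:latest \
  python -c "import torch; print(torch.cuda.is_available())"
```

The job's stdout is streamed back to your terminal, so quick hardware checks like this are a cheap way to validate a flavor before submitting longer training runs.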
How do I use the hf CLI to download a Hugging Face model?
Run `hf download <repo_id>` to download the full model to the cache directory, or add the `--local-dir` option to specify an output path. For example: `hf download meta-llama/Llama-3.2-1B-Instruct --local-dir ./model`.
Does uploading with hf commands require authentication?
Yes. Upload operations require authentication first. Run `hf auth login` for an interactive login, or pass a token via `--token $HF_TOKEN`. Once authenticated, upload files with `hf upload <repo_id> <local_path> <target_path>`.
How do I clean up the local Hugging Face cache?
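For CI or scripted use, the token-based path above can be sketched like this (the repo name is a placeholder, and `hf_xxx` stands in for a real access token):

```shell
# Non-interactive auth: export the token once, then pass it explicitly
export HF_TOKEN=hf_xxx
hf upload my-org/my-model ./config.json config.json --token "$HF_TOKEN"
```

Many CI systems let you inject `HF_TOKEN` as a secret, which avoids ever writing the token into the repository.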
Use `hf cache ls` to view all cached content and its disk usage. Run `hf cache rm <repo_or_revision>` to delete a specific cached item, or run `hf cache prune` to remove all unreferenced revisions and free disk space.
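A typical cleanup pass combining the commands above, assuming the subcommand names in this document (the repo ID is a placeholder):

```shell
# See what is cached and how much space each repo uses
hf cache ls

# Remove one repo's cached copy entirely
hf cache rm meta-llama/Llama-3.2-1B-Instruct

# Drop every revision no longer referenced by a branch or tag
hf cache prune
```

Running `hf cache ls` again afterward confirms how much disk space was reclaimed.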